A Solution of the P versus NP Problem? (arxiv.org)
662 points by fahrbach on Aug 14, 2017 | 286 comments



This is definitely one of those "I'm going to wait for the peer review" claims, but it is pretty exciting.

Pros: The author is not a dilettante, and is actively researching in the area (http://theory.cs.uni-bonn.de/blum/Forschung/forsch.var)

Cons: It's not my area, but I was expecting something a little more novel for a solution to P?NP. This almost seems too simple (it might almost fit in a margin...).

Could be pro or con: Single author. It's becoming rare for important new work to not have multiple authors, especially from professional academics. However, Andrew Wiles...


The paper looks like it is 38 pages and builds on other work. That doesn't seem too simple to me.


I tend to agree with GP that this is suspiciously simple. I've only skimmed it, but the novel part of the proof appears to only be Thm 5-6 which is less than 10 pages, and it's not especially dense writing. So this would be a relatively simple proof. Moreover, the technique used appears to be rather incremental over known techniques, so it's surprising it would be strong enough to prove PvNP which is so far away from the frontier of known techniques.


The known technique was a proof of an exponential lower bound on something extremely similar to an NP-complete problem. That moved the frontier of known techniques a lot closer to P vs NP.


The author's other publications:

http://theory.cs.uni-bonn.de/blum/PubBlum.var


+1, I like the reference to Fermat's ghost here and all the connotations in that short ponder.


> However, Andrew Wiles...

Grigori Perelman also comes to mind.


I would rather say "Shinichi Mochizuki".


A list of 116 previous "solutions" to the P versus NP problem: http://www.win.tue.nl/~gwoegi/P-versus-NP.htm


After skimming, it's interesting that the majority of the proofs in your list claim P equals NP. I would have guessed it would be more common for proofs to claim the opposite, because P != NP makes sense intuitively. That being said, it would certainly be more exciting if P did equal NP.


A simple but wrong proof of "P = NP" is easier to write in some ways, since you "just" need to provide a single algorithm for one NP-hard problem, and show that it runs in polynomial time. It looks like many or most of the proof attempts in that list take this form.

A plausible proof of "P != NP" won't be quite as simple to express, since it needs to prove that all such algorithms do not run in polynomial time.


> A plausible proof of "P != NP" won't be quite as simple to express, since it needs to prove that all such algorithms do not run in polynomial time.

That sounds hard, but if for any one NP-complete problem there exists no polynomial-time solution, then for all NP-complete problems there is no polynomial-time solution. So this proof sounds like it has the right shape.


Well yes! Two things: 1) you have an algorithm that runs in polynomial time and solves an NP-hard problem, and 2) you can map other NP-hard problems to your problem. Without the second factor, it is only a demonstration of a "range" in the computational realm in question, where P = NP or whatever the declaration. Being able to show that your pizza slice is actually an ocean of pizza, and also that any other shape of pizza slice can be appropriately transformed into the shape you have, means P = NP: a "solution." Or perhaps better put, it is a funnel through which complex computational patterns can be reduced, simplified, or elegantly correlated, approaching absolutely perfect parallelization of operations. This is just one way to look at it, but essentially "intractability" is an interesting term to consider.

Please forgive me if my liberal use of the language is an offense


Your 2 is the easy part: "NP-hard" is exactly the set of problems to which any NP problem can be reduced in polynomial time, and there are many known existing examples, both within NP (aka NP-complete) and outside it.


So part of the solution set satisfies P=NP and some satisfy P!=NP?


No, some NP-hard problems are outside NP (i.e. harder than NP), while some are inside. The wikipedia article has a good explanation and a good diagram: https://en.wikipedia.org/wiki/NP-hardness


Many thanks for your kind explanation. It makes more sense now. I forgot the important detail that the mapping is pretty easy, as most reductions go to and from 3-SAT.


The funny thing is that all these "solutions" are implications but not proofs, and one contrary demonstration would actually render them all irrelevant, as far as I understand the theory of computational P vs NP time complexity.


Scott Aaronson on "suppose someone sends you a complicated solution to a famous decades-old math problem, like P vs. NP. How can you decide, in ten minutes or less, whether the solution is worth reading?": http://www.scottaaronson.com/blog/?p=304


FWIW, my assessment is that this paper passes the Aaronson test, as well as my own personal bogometer. If I were a betting man, my money would be on "the proof will turn out to have a flaw, but it will be hard to find the flaw, and the finding of the flaw will be interesting in and of itself, and possibly even advance the field."


And even though it may be hard to find the flaw, it will be easy to verify that it is a flaw.


If only we had a way to make finding the flaw as easy as verifying the flaw


I don't think it would be possible to make finding it as easy as verifying it, but how can we prove that?


Finding flaws in proofs is as easy as verifying them. After all, finding a flaw amounts to just checking a proof. Finding a proof is the truly hard challenge.


Checking a proof isn’t necessarily easy, of course, in the sense of both human hardness—such as requiring a lot of background, using dense notation, or being very long—and computational hardness, such as using a logic whose decision procedure takes exponential time.


What kind of esoteric logic yields proofs that require exponential time to check?


Proof checking in dependent type theory is nonelementary. Of course, this is a completely artificial result with no real world consequences, but that's complexity theory for you...


Well, I'm sure we can all agree that the mathematical paper we're discussing could be formalized into a proof that could be checked in linear time.


That's only obvious in a vacuous sense.

As a PhD student in Programming Languages, can I request a reference?* Nobody managed to provide one on MathOverflow, and that's the StackExchange for professional mathematicians:

https://mathoverflow.net/q/226966/36016

Of course, your claim is vacuously true, since you can add a trace of all steps of a proof verifier. It's also vacuously true because you can add a dump of the Internet to the proof. Neither thing is insightful.

> what we normally think of as an "honest proof" is a series of steps each of which follows from the previous by application of one of a finite number of axioms or rules of deduction, and such an "honest" proof is thus checkable in linear time by checking each step

Also, for the record: actual formal proofs for (say) standard ZFC are exponentially bigger than anything you want to work with.

EDIT: To clarify, I didn't mean to imply I'm some authority, just to suggest I'm not so obviously* an idiot. Which was maybe stupid anyway.


Well, I've spent a lot of time with theorem provers as well as with normal mathematics. I've never experienced an exponential blowup when formalizing mathematical ideas. It would always boil down to step-by-step verification. And I don't see any reason to suspect that the paper under discussion uses non-standard techniques.


Wow, this got more replies than I expected. I was partly kidding, in that I don’t think it’s a huge concern for humans, but for example the tableau decision procedure[1] for modal logics such as epistemic logic is EXPTIME-complete[2].

[1]: https://en.wikipedia.org/wiki/Method_of_analytic_tableaux

[2]: https://arxiv.org/abs/0808.4133


Someone should research and write a paper about it.


But then how will we know that paper is correct?


I'm pretty sure that the answer to that question involves a cat.


Sounds hard.


no problem


NP-hard.

/dodge


That was the joke. :-)


This is getting a bit too meta for my tastes :')


By using the algorithm!


My favorite comment. By far.


This lines up perfectly with most software bugs. Hard to find, but once you have them, easy to prove they are the cause of the problem. The difference is that in software, bugs are usually fairly easy to fix, whereas a math proof with a bug in it might not be fixable at all.


I think you missed the joke


After looking over the paper and applying Scott Aaronson's criteria, I bet against you:

- The proof will turn out to have a flaw

- The flaw will not be that hard to find (though probably not completely trivial; rather of the kind where it takes a lot of time to go through the details of the proof's arguments)

- The flaw will not be interesting in itself and will not advance the field


Seconded. The flaws in these papers are not hard to find. They usually amount to misunderstanding of definitions or the results they cite.


Unlikely, considering the author of the paper. I'm basing that on other comments elsewhere in the thread, e.g.:

> ...this guy is an established senior researcher at the University of Bonn


Even experienced people can make "silly" mistakes. https://arxiv.org/abs/1612.04208v1


They can—but that doesn't mean we should assume it's a silly mistake (as opposed to a deep one) without more information than that he's an accomplished researcher.


To add to this, I have a Ph.D. in applied mathematics. I am also an accomplished researcher, albeit retired.

I make more mistakes by 09:00 than most people will make all day. Hell, my girlfriend would suggest that we can move that time forward a couple of hours.


"Yet here’s the bloodied patient, and here we are in the emergency room." "The lemma encrusted shoulders of giants."

Aaronson writes very well.


"At some point, there might be nothing left to do except to roll up your sleeves, brew some coffee, and tell your graduate student to read the paper and report back to you."


I'd rather he applied his test to a bunch of papers for validation and showed us a confusion matrix.


A little too flowery. He's also needlessly political.


Are you one of those people who equates any mention whatsoever of Trump with "being needlessly political"?


No, but this one was.


Could you clarify what was needlessly political? I can't seem to find anything political there at all.


Injecting a controversial (and mostly detested here) political figure into an unrelated topic.


I meant actual examples. But on second read I assume this is about the blog in general and not this post in particular that doesn't seem to have any Trump references.


Not at all. But bringing him up in an unrelated post on a technical blog just screams "virtue signaling" to me.


When you feel strongly about something and write about it, you aren't virtue signalling, you're expressing your view. If you have any reason to take the rather uncharitable stance that his views on Trump are inauthentic, then post them, otherwise it's better just to assume that people who say things you disagree with actually think those things rather than are just saying them to look good.

Accusing others of "virtue signaling" is a much clearer example of "virtue signaling" in my opinion, since it's a particular population that tends to do it, and is used primarily to put down those they disagree with and mark them as belonging to a different group rather than to actually make any kind of meaningful point.


I'm curious what I said that sounded to you like "his stated views on Trump aren't authentic."


https://en.wikipedia.org/wiki/Virtue_signalling

    Virtue signalling is the conspicuous expression of moral
    values done primarily with the intent of enhancing
    standing within a social group.
I don't think Aaronson states his views on Trump with the hopes of increasing his social status. He does it because it's just what he feels and wants to express himself.


And which part of that definition has anything to do with the views being inauthentic?

I feel like I'm taking crazy pills here.


When you express a view that you don't hold in order to receive social approval, or when you express a view that you hold lightly with more intensity and frequency than you would except for your desire to receive social approval you are being inauthentic.

The whole value of the phrase 'virtue signalling' is that it accuses those you disagree with of inauthenticity. If I'm saying a thing I genuinely believe because I genuinely believe it, I'm not virtue signalling. It's only if the reason for saying the thing is to gain social approval that it's 'virtue signalling'. Any assumption that your opponent is doing something for this reason is uncharitable, and kills rational debate. It's essentially an ad hominem attack.

It's also ignoring the fairly detailed entry that Scott posted explaining in his own words why he started talking about Trump in his blog: http://www.scottaaronson.com/blog/?p=2777 when previously he'd avoided the subject.

Maybe it is virtue signalling, or maybe he really did feel as he claimed, that he had a moral responsibility to speak out against Trump. If you assume that it is only 'virtue signalling', you are cutting yourself off from engaging with his points.


You're correct.

Virtue signalling is usually done by people who sincerely believe the values they espouse - but they emphasize or show those values in order to gain or reinforce social status within the group. It usually is done by proclaiming things you hate, rather than things you like.

If the comment adds nothing besides an "I'm with you guys, that's the worst", it's fair to see and describe it as virtue signalling.

And yes, it happens on both sides of the aisle - heard lots of conservatives proclaiming their distaste for Obama over the last decade.


>virtue signaling

You know, that was an interesting and useful concept before you guys decided to turn it into yet another empty insult to fling at people.


It's still an interesting and useful concept even if it's also used as an insult. (Much like you might refer to a colleague's tight new shirt and gold chain as a "mating display" without disparaging biologists' use of the term.)


I mean he also pokes fun at his children, himself, and posted a rap video about theoretical computer science on his blog. His blog is not exactly the Annals here... I think he is more than welcome to write about whatever he damn pleases and if you disagree you are more than welcome to argue in his comment sections (and more than likely lose)


Of course he's welcome to write about whatever he likes. It's his blog.

But I'm welcome to roll my eyes at it.


I'm rolling my eyes at your eye rolling! How deep can we go?!


In what world is Scott Aaronson, a professor whose primary research area is computational complexity, unrelated to a P=NP solution?


I was saying that President Trump is unrelated to the usual topics on that blog.


The proof of the Gaussian correlation inequality:

- Was written in Word

- Used techniques that just seem too wimpy for the problem at hand

- Was published in a predatory journal


For those curious about the details of this, I found this interesting article: https://www.quantamagazine.org/statistician-proves-gaussian-....


Hence Scott explicitly takes time to address that there will surely be outliers on both ends.


I just wanted to point out that this is not merely a theoretical possibility: actual groundbreaking work did look sketchy as hell.

The "the nice thing about math is that sooner or later the truth comes out" bothers me. The knowledge in the Archimedes palimpsest was lost for so long that the "eventually" doesn't seem to be much comfort.


Grassmann was another, more recent example, though obviously we can't now find recent examples that will lie dormant for millennia...


When we trust Academia to write their own history books, the theme is always "sooner or later the truth comes out."


See also the more P-vs-NP specific version he put out later: http://www.scottaaronson.com/blog/?p=458


"At some point, there might be nothing left to do except to roll up your sleeves, brew some coffee, and tell your graduate student to read the paper and report back to you."

So there's your real answer.


Aaronson has commented on this particular proof. He's, to say the least, skeptical, based on an observed polynomial circuit in section 7: http://www.scottaaronson.com/blog/?p=3389


Points 6 and 10 may apply. I only skimmed the paper and may have missed the really interesting point, but to me the argument and method just look too simple to have been missed by everyone for all this time. But I am also totally not an expert in the field.


Aaronson has since commented on this paper on his website. He still believes that it's a bogus proof.


The question is, how can you do it in polynomial time? :)


Interesting that this is the second P!=NP proof from a University of Bonn researcher. Other one, by Mathias Hauptmann, is here: https://arxiv.org/abs/1602.04781

I never did hear the status of Hauptmann's proof (I'm not connected to academia so only know what I've read on the internet), but given it's been over a year without word, presumably there's something flawed.

I might not get too excited over this proof, either, until another member of the TCS community can vouch for it. There have been many serious-looking attempts at PvNP that turn out to have fundamental flaws.


Also interesting: if you look at the acknowledgements of Hauptmann's paper (at the end, just before references), he says, "I would like to thank Norbert Blum for carefully reading preliminary versions of the paper, for helpful remarks and discussions, for his guidance and patience and for being my mentor." This means Blum knew about this past attempt and presumably thought it correct enough to put on arXiv. I assume from the current paper that he has changed his mind since then.


How does the paper of Hauptmann prove P != NP?

Sigma_2^p != NP as far as I know and after a brief skimming the paper does not mention P != NP.

Edit: the paper does indeed mention P != NP in the form of P != Sigma_2^p => P != Sigma_1^p = NP. Please disregard my comment.


P = NP implies the polynomial hierarchy collapses, thus P = Sigma2, so by contradiction P != NP.


I would normally sigh and move on seeing such a claim, but this guy is an established senior researcher at the University of Bonn. A career-ending disaster or instant and eternal fame, that's some serious cahunas.


It shouldn't be career ending unless there's been malfeasance or some kind of sloppy work. The people who go down hard usually have some crime beyond audacity.

It could easily cause some painful embarrassment, but hopefully that would pass with time.

Maybe embarrassment similar to that faced by the researchers whose results suggested FTL communication, but it turned out to be a bad fiber-optic cable. I didn't follow up but I assume that team is doing OK.


I wouldn't be surprised if they gained a bit of respect. If I recall correctly, they did everything right - they had an apparently impossible result, they spent a lot of time trying to disprove it, and when they announced it they made it clear that they doubted the result. Hiding it because they were sure it was false but couldn't disprove it would have been poor science. It had to be pretty scary to make that announcement knowing there was almost no chance it wasn't going to turn out to be a problem in their setup, and I admire the guts it took to do it.

Sometimes the courage of your convictions means trusting the process to get it right when you're sure you're wrong.


Actually, there was a bunch of blow back and the main researchers behind the result stepped down.

https://www.newscientist.com/article/dn21656-leaders-of-cont...


Reading that, it seems more like they went back to doing research full time instead of being part time administrators -- and probably happier for it.


Agreed. Even in the event of the discovery of work which would qualify as minimally sloppy, I'd think his reputation (if not wanting of some polish) would still be intact after a quick retraction.


I mostly agree, but there is a small effect: "claims spectacular proofs with flawed arguments" might still be a reason to doubt his future proofs more until verified. And mathematicians do use such criteria before reading unverified claimed proofs of hundreds of pages (not uncommon). And since some proofs target tens or hundreds of persons, that can be a problem. I've read such stories, though I don't have a reference at hand.


> serious cahunas

I think you mean "cojones" (which btw is a very rude word in Spanish). A kahuna is a kind of Hawai'ian shaman.


I stand corrected, thank you. I would normally have said "serious bollocks", but this forum is mostly left-ponders who would probably not have caught my drift.


North American here, we hear enough Brits speak to get the expression :)


"serious bollocks" means "very questionable statement" in the British English I'm familiar with. Perhaps GP's idiom is from elsewhere. If the reference is to bravery/daring one would use "balls" as in US English, I believe.


"Bollocks" in Britain and "Balls" in North America both refer to the Testes, as does "cojones". So all three statements are actually the exact same slang in regional dialects.

Bollocks, like many slang words, has multiple meanings in different contexts. It can be used as you stated as well, but "What you said is bollocks" and "You have bollocks for saying it" are very different statements.


I'm sitting here chuckling to myself about the "why not both?" possibilities.


It's a common mistake - http://itre.cis.upenn.edu/~myl/languagelog/archives/003526.h... - and you could imagine a big kahuna would have to have serious cojones.


> A career-ending disaster

Why? Why should posting a stab at a complicated problem be considered career-killing?


Some people will hold it against him because some people are too stupid to see things in context. Rather than respecting him for taking a smart risk in pursuit of an important goal, they will just remember that he was once wrong about something and in a big way.

There are people who point at others' failures to bolster their own image by comparison. And there are people with fragile egos who are eager to see others in a negative way.

It shouldn't be that way, but it sometimes is, so I wouldn't say the career risk is zero. But I'm still glad he's doing it regardless of the outcome.


Is this really how the academic community is?


That's really how humanity is.


Goes to show that even the so-called "experts" are human after all.


It's career "damaging" (I wouldn't say killing) because so many crackpots have attempted this problem that making a wrong attempt makes you look a bit like a crackpot.


I think he means if the stab is not serious but a troll. Of course, if it's a genuine and serious attempt, I don't think it should be frowned upon at all.


Seriously, best way to ensure people stop trying is to persecute attempts.


What he said.


I don't think that's true. See Vinay Deolalikar.

https://en.wikinews.org/wiki/Researcher_claims_solution_to_P...

https://arxiv.org/find/all/1/all:+AND+Vinay+Deolalikar/0/1/0...

He is still publishing papers. Seems to be doing fine.


There's a third possibility: there's a flaw in the proof, but one that is hard to find, and the discovery of the flaw actually advances the field in interesting ways.

If I were a betting man that's actually where I'd put my money, because the paper passes my bogometer test (but I am nowhere near qualified to assess whether it's actually correct).


Wasn't that the first possibility?


> A career-ending disaster

He is a tenured professor in Germany, there is very little that could end his career (essentially refusing to honour his teaching obligations or being convicted of a felony). At worst, this will be immensely embarrassing, but you can't kick professors out just because they make a fool of themselves.


And professors don't just make a fool of themselves because one of their papers had a flaw in it.


You have no idea...


If you think this is career ending you probably have no idea on how hiring and promoting decisions are made in German universities.


Please can you enlighten me?


Just a half-minute skim shows the author claims it passes the natural proofs barrier but makes no claim about it being non-relativizing or non-algebraizing.


For those following along at home: this is important because we have a proof that relativizing wouldn't work. In this context, relativizing means relative to an oracle. For example, P^A means "just like a Turing machine would regularly recognize languages except this one has a magical oracle that can recognize A in one step". A can be arbitrarily complex, including NP-complete ones like 3SAT.

This is important because the Baker-Gill-Solovay theorem already demonstrates that there exist oracles A != B relative to which P^A = NP^A, but P^B != NP^B. This shows that the problem has contradictory relativizations, and hence can't be proven that way. This matters because it's a litmus test against quack proofs.

I don't think this is a problem here; the proof doesn't appear to be doing that.


I first learned about this because there's a decades-old joke in NetHack that cites this result when you ask for a major consultation from the Oracle when you can't afford to pay for it. I think the joke is that in this case, the Oracle can't tell you anything that's useful to you. :-)


AFAIK boolean circuits proofs generally don't relativize, so there's no need to discuss that.


I like the straightforward title. I know that it is politically correct to christen a paper solving, e.g., the Poincare conjecture with something like "Ricci flow with surgery on three-manifolds", but all rules are there to be broken once.

I wish the author best of luck.


At least he couldn't resist making the important point a corollary.


what else, a mere remark? a theorem?


I was actually disappointed that the title was obfuscated. I would have named it something like "P does not equal NP" or if I wanted to hedge my bets a bit more "A proof that P does not equal NP"


I think you could call this academic clickbait. I know I clicked to find out what his conclusion actually was.



I see a few typos in the wording of the paper (e.g., "spezify", "touchs", etc.). While this doesn't mean much, I would expect if the paper had gotten a fine-toothed comb review that these sorts of typos would have been caught. Moving back to the "not holding my breath" stance unless there start to be indications from experts in the field that the claims are holding up.


I can forgive imperfect English. He's not a native English speaker and his reviewers probably are not either.


The paper is on arXiv, so we don't know if it's gotten a fine-toothed comb review yet. Furthermore, some reviewers seem to ignore those kinds of typos, which means a paper might have been reviewed carefully for the technical contents, but not at all for language.


It's on ArXiv, people are free to publish there without getting a language review. The author's first language is not English.


Can someone ELI5 what this problem is, how likely the proof is to hold up to scrutiny, and whether P != NP follows?


Here is a largely correct ELI5: P vs NP asks the question "Are problems that are easy to verify (NP) also easy to solve (P)?"

Note that the reverse is obviously true: problems that are easy to solve are also easy to verify.

Here is an example: Take the problem "Find the minimum of 5, 6, 7, 8". You solve the problem and tell me that the answer is 5. I can verify your answer by solving the problem myself, getting the answer 5 and comparing it with your answer. So we can conclude "Problems that are easy to solve are easy to verify." In other words, P ⊆ NP.

_Now is the reverse true? Are problems that are easy to verify also easy to solve?_

Let me give you an example. Let us assume that the question is "Is 1053188576519689 prime?". You come back and tell me, "No it is not prime, it is divisible by 32,452,867".

1) It is easy to verify your solution. I can divide 1053188576519689 by 32,452,867 and verify that it is indeed divisible. 2) It is hard to solve the problem: I have to try out numbers from 2, 3, ..., sqrt(1053188576519689), which is quite painful. (Or maybe there is an as-yet-undiscovered better algorithm.) So it appears that problems that are easy to verify may not be easy to solve. Or it appears that NP ⊆ P is not true. In other words, it appears P != NP (because if P ⊆ NP and NP ⊆ P, then P == NP).
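To make the asymmetry concrete, here's a rough Python sketch (just a toy illustration of "verify vs. solve", nothing from the paper): verifying the claimed factor is a single division, while finding it by naive trial division can take up to ~sqrt(n), roughly 32 million, divisions for this particular number.

    import math

    def verify_factor(n, d):
        # "Easy to verify": one modulo operation on the claimed factor.
        return 1 < d < n and n % d == 0

    def find_factor(n):
        # "Hard to solve" (at least naively): try 2, 3, ..., sqrt(n).
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d
        return None  # n is prime

    n = 1053188576519689
    print(verify_factor(n, 32452867))  # fast: checks the claimed answer
    print(find_factor(n))              # slow: brute-force search for a factor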

NP problems have wide ranging applications in things like cryptography for example. Let us assume I have a hashing technique. It is easy to hash a document, but hard to reconstruct the document from the hash. Then this technique can be used in auctions where you do not trust the auctioneer. You publicly submit the hash of your bid before the deadline. You do not submit your bid itself, because you are afraid that the person handing out the contracts will reveal the number to his brother-in-law who will bid $1 more than you and win the contract. After the deadline is passed, you send your actual bid to the Auctioneer.

Now 1) Everyone can verify that the documents have not been altered (the hashes are posted publicly, each document can be hashed and compared with its publicly posted hash). So it is easy to verify that the documents have not been tampered with after the deadline.

2) Nobody can construct the document from the hash. So it is not easy to solve for the bid document given the hash. So everyone can post the hash publicly with confidence before the deadline.

If P != NP we can have this type of auction. If P == NP then there is no difference between posting the hash publicly and posting the document publicly.
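For what it's worth, here is a minimal Python sketch of that commit-then-reveal idea, using SHA-256 as the hash. It's a toy, not a real auction protocol, and the random nonce is my addition: without it, a small space of plausible bids could simply be hashed one by one to recover yours before the deadline.

    import hashlib, os

    def commit(bid):
        # Publish the digest before the deadline; keep bid and nonce secret.
        nonce = os.urandom(16)
        digest = hashlib.sha256(nonce + bid.encode()).hexdigest()
        return digest, nonce

    def verify(digest, bid, nonce):
        # After the deadline, anyone can recompute the hash and compare.
        return hashlib.sha256(nonce + bid.encode()).hexdigest() == digest

    published, secret = commit("$1,250,000")
    print(verify(published, "$1,250,000", secret))  # True: bid unchanged
    print(verify(published, "$1,250,001", secret))  # False: tampering detected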


(You may have omitted this detail on purpose, but I think it's worth pointing out) The example of trial division as a primality test is a good illustration of an algorithm that takes a lot of work to run but whose output is easy to verify. However, the problem of primality testing is actually in P. [1] You have to use a fancier algorithm than trial division in order to get polynomial time.

[1] https://en.wikipedia.org/wiki/AKS_primality_test


Another detail that the GP most likely left out on purpose: the fact that divisibility is easy to check means that primality is in co-NP. Primality being in NP is actually non-trivial (even before AKS). See Wikipedia for more details: https://en.m.wikipedia.org/wiki/Primality_certificate


This is a good example of how Knuth's argument, that some P=NP proof could be non-constructive or involve a massive polynomial, could play out.

https://cs.stackexchange.com/questions/23260/when-is-the-aks...


I want to add another important aspect, the aspect of NP completeness. There is a bunch of problems considered "NP complete" and they are all related such that it is easy to translate one problem into another (easy as in "quickly").

This means, first: If P == NP, then all of these problems become easy, and second: if P == NP and we find an algorithm that solves only one of the NP complete problems quickly, then this algorithm can solve all of these problems quickly.

Conversely, if this paper's proof is correct and P != NP, then there is no algorithm that solves any of these problems quickly.
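To make "translate one problem into another" concrete, here's the textbook reduction from Independent Set to Clique in Python (my own toy example, not something from the paper): a graph has an independent set of size k exactly when its complement graph has a clique of size k, and building the complement is clearly polynomial.

    from itertools import combinations

    def complement(n, edges):
        # Polynomial-time translation step: flip which pairs are edges.
        all_pairs = set(combinations(range(n), 2))
        return all_pairs - {tuple(sorted(e)) for e in edges}

    def has_clique(n, edges, k):
        # Brute-force solver, exponential -- fine for a toy demonstration.
        edge_set = {tuple(sorted(e)) for e in edges}
        return any(all(tuple(sorted(p)) in edge_set for p in combinations(group, 2))
                   for group in combinations(range(n), k))

    def has_independent_set(n, edges, k):
        # Solve Independent Set by reducing it to Clique on the complement.
        return has_clique(n, complement(n, edges), k)

    cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(has_independent_set(4, cycle4, 2))  # True: {0, 2}
    print(has_independent_set(4, cycle4, 3))  # False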


NP completeness kinda works around the uncertainty of PxNP (with x ∈ {⊆, ⊊}), because it defines some sort of "weak subset" of NP comprised of "pretty sure these problems are not in P[, because no one yet thought of a polynomial reduction to a problem in P]". The last part in brackets is the catch here; if we could show that it is not possible, then P!=NP would immediately follow, and NP completeness would become a largely pointless exercise.


NP-completeness would still be of interest for particular problems, because it would prove that those problems could not be solved in polynomial time. For instance, if factoring were shown to be NP-complete, that would be a really useful result, both for showing the security of algorithms like RSA and for potentially disproving the extended Church-Turing thesis if quantum computers can be built.


Also wanted to say:

People have been taking a go at this for many years now. Grapevine says that several large CS departments in many countries have groups of graduate students devoted to solving sub-problems of the entire proof because it is a prestige issue. Extraordinary claims require extraordinary proof and any proof will go through multiple peer reviews.

Perelman's proof of the Poincare conjecture was studied for a long time (~3 years) before being declared correct, and that was considered "fast". https://en.wikipedia.org/wiki/Grigori_Perelman#Perelman.27s_...


No TCS grad student I have talked to has had this experience...

Almost everybody in the field knows we're far from an actual proof with current techniques.


I should have been more precise. By problem, I was referring to whatever, "Berg and Ulfberg and Amano and Maruoka have used CNF-DNF-approximators to prove exponential lower bounds for the monotone network complexity of the clique function and of Andreev's function," means, not "P ? NP."


Let me take a stab at explaining that, although I'm not really an expert and probably misunderstanding something. Hopefully someone else will be able to correct me where I'm wrong.

The clique function takes a bit string representation of a graph with m nodes (one bit for each pair of nodes, 1s where there is an edge between two nodes) and outputs whether the graph has a fully connected subgraph of size s (a clique).

CNF and DNF are conjunctive and disjunctive normal forms, respectively. CNF has the form (x OR (NOT y) OR ... ) AND ((NOT z) OR y OR ... ) AND ..., while DNF has AND and OR exchanged. Any boolean function can be expressed as CNF or DNF, but this might blow up its size exponentially.

The monotone network complexity is the number of binary {AND,OR} gates you need to compute a function. Monotone because increasing the input (setting a bit to 1) never decreases the output. The basic gates have this property, and if you never use NOT the composition has it too. This means that monotone networks can only compute monotone functions. The clique function is monotone, since adding an edge can never destroy an existing clique.
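Just to pin down the objects involved, here is a tiny Python version of that clique function on bit-string inputs, together with a brute-force check of the monotonicity property on all 4-node graphs (flipping an input bit from 0 to 1, i.e. adding an edge, never flips the output from 1 to 0). This is only meant to illustrate the definitions, not anything specific to the paper.

    from itertools import combinations

    def clique_fn(bits, m, s):
        # One input bit per node pair; output 1 iff some s nodes are all connected.
        pairs = list(combinations(range(m), 2))
        edges = {p for p, b in zip(pairs, bits) if b}
        return any(all(tuple(sorted(q)) in edges for q in combinations(c, 2))
                   for c in combinations(range(m), s))

    m, s = 4, 3
    n_bits = m * (m - 1) // 2
    for x in range(2 ** n_bits):                        # every 4-node graph
        bits = [(x >> i) & 1 for i in range(n_bits)]
        out = clique_fn(bits, m, s)
        for i in range(n_bits):
            if bits[i] == 0:
                flipped = bits[:i] + [1] + bits[i + 1:]
                assert clique_fn(flipped, m, s) >= out  # adding an edge never hurts
    print("clique_fn is monotone on all", 2 ** n_bits, "graphs with", m, "nodes")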

The CNF-DNF-approximators mentioned are a technique for creating a CNF (or DNF) of such a network by introducing a limited number of errors (hence "approximator") at each step. This is done by switching between CNF and DNF at each gate, but discarding parts of the formula that get too large. (I don't really understand how that keeps the error bounded.)

Using the properties of the clique function, it is possible to show that the total number of errors by a limited CNF-approximator must be large, which means that the switching procedure must have been applied many times. This gives a lower bound on the number of gates in any monotone network that computes the clique function, and this bound is exponential.

The paper under discussion attempts to extend this result to non-monotone networks, which can also make use of negation. To do that, it extends the CNF/DNF-switching to also handle negated variables without introducing significantly more errors. (Again, I don't understand how that works.)

Assuming the extension is correct, any bound on the monotone network complexity using CNF-DNF-approximators also holds for the non-monotone network complexity of the given monotone function.

Applying this to the exponential lower bound of the clique function, this means that there is no non-monotone network of a polynomial number of gates that computes it, which implies that there is no polynomial-time Turing machine, which implies P != NP.


This is a bit of a mischaracterization of secure hashing.

You can not uniquely reconstruct a document from a hash value. Indeed, for a typical secure hash function, a given hash value maps to an infinite set of inputs. But because secure hashes are one-way functions, finding any of those inputs for a given output requires brute force. This is why the bidding example works - it's so hard to find any input that hashes to a value that even having one of them is reasonable proof that it was the source document.

Consider a secure 1024-bit hash function. How many 1025-bit inputs map, on average, to each hash value? (Two, and each additional input bit doubles that.)


Scott Aaronson has done some really good write-ups about P-NP, including what it means and what avenues of proof are possible. I'd start here: http://www.scottaaronson.com/papers/pnp.pdf. It probably doesn't count as "ELI5", but it's something.


That's a paper worth reading even if you don't understand it all because it will give you a good feeling for the magnitude of the problem. But it's actually surprisingly accessible. Aaronson is quite good at making hard things (relatively) easy to understand.


Here's a great video on the subject: https://www.youtube.com/watch?v=YX40hbAHx3s



After skimming the paper, the methods used look quite basic. There seems to be no really deep new insight as far as I can tell. Let's see what experts will have to say.


An insightful interview to give perspective. http://www.se-radio.net/2017/07/se-radio-episode-298-moshe-v...


If an established researcher in the field makes a breakthrough of this magnitude, I would expect rumors to start circulating first.

Showing the draft to few colleagues to see if they can spot mistakes before 'shaking the world' is probably a good idea.


Hmmm, maybe he analyzed the game theory:

If he publishes without consulting peers:

  If his proof is correct, he gets unending fame, millions in prize money.

  If his proof is laughably flawed, he'll promptly be forgotten as one of the 100s who have been wrong before him.
If he consults his peers:

  If his proof is correct, he may end up sharing credit, maybe they'll even publish his work quietly under their own name while he's still waiting for feedback. Maybe that is what we are reading now.

  If his proof is laughably flawed, then they'll give him his feedback and only his peers, instead of the whole internet will laugh at him for a day, before it's all forgotten.
Seems like with any tiny chance of a correct proof, the dominant strategy is, by far, to publish without consulting your peers.


Entirely possible: The paper's author has actually given lectures on game theory.


What are the implications of solving the P versus NP problem? What practical effects would that have? Not trying to belittle the problem, just curious as an outsider.


The most interesting thing is if P=NP. If that's the case, that means that there is an algorithm that can solve any NP problem in polynomial time. This means that things like crypto would be able to be cracked in polynomial time which presents a huge problem for security.

We basically operate under the assumption that P!=NP currently. Validation that this is true doesn't really change much. I can't speak to how this may help an academic researcher in CompSci, but it probably doesn't change much for most programmers.


I think it's even less useful: even if P = NP, it is possible that no one finds an algorithm. Creating a (useful) algorithm is independent of proving the theorem. Also interesting is that someone could create an algorithm that solves an NP-complete problem in polynomial time without proving P=NP. They would be unable to prove the algorithm correct, though.


>> even if P = NP it is possible that no one finds an algorithm.

If someone could prove P=NP but no one could find an algorithm, that would be incredibly funny in some sense. Like a huge joke played on us by the universe.


Alternately, a polynomial time algorithm is specified and the exponent is, say, 2500. Mathematically huge, practically useless.


Actually, we already know an algorithm that solves an NP-complete problem in polynomial time iff P = NP. We can just enumerate all programs and ask them to generate witnesses. Since we can decide whether a witness is valid, and there is a program that always outputs a valid witness if P=NP, we will find that program within finitely many steps, and the algorithm is correct.


Correct me if I'm wrong, but enumerating all programs doesn't seem polytime


Formally, we run all possible programs in parallel: the 1st program runs for 1 step; the 1st and 2nd programs run for 1 additional step; programs 1-3 run for 1 additional step; ... After a program stops, we check whether its output is our witness. If there is a program number K running in time f(n), then our program runs in time O(K * f^2(n) * [simulation time] * [check time]), which is polynomial if f is polynomial.
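Here's a rough Python sketch of just that scheduling idea. The "programs" below are made-up toy generators hunting for a factor of 91, and the witness check is correspondingly fake; a real universal search would enumerate all programs and use the NP verifier, so treat this purely as an illustration of the dovetailing, not of the actual construction.

    def dovetail(programs, check_witness):
        # Round r: admit program r, then give every admitted program one more step.
        running = []
        remaining = iter(enumerate(programs))
        while True:
            nxt = next(remaining, None)
            if nxt is not None:
                pid, make_prog = nxt
                running.append((pid, make_prog()))
            for pid, gen in running:
                out = next(gen, None)            # run this program for one step
                if out is not None and check_witness(out):
                    return pid, out              # someone produced a valid witness

    def counter(start, step):
        # A toy "program": proposes start, start+step, ... one candidate per step.
        def prog():
            d = start
            while True:
                yield d
                d += step
        return prog

    programs = [counter(2, 1), counter(3, 2), counter(90, -1)]
    print(dovetail(programs, lambda d: d > 1 and 91 % d == 0))  # -> (1, 7)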


There is the Universal Search algorithm. If P=NP, it finds a solution for a solvable 3-SAT instance in polynomial time. (It still doesn't decide 3-SAT itself, since we may not be able to determine its running time, but in cryptography AFAIK we usually need solutions for known-solvable instances.)


This is absolutely correct. Proving P=NP doesn't magically create the algorithm, it just proves that there is one.

However, the reason why I chose to single out crypto specifically is because it has the most to lose if that algorithm exists. Our current methods of encryption become unsafe regardless of whether the algorithm is known or not. I don't think you can claim that your encryption is secure if there is an algorithm that can crack it in polynomial time, regardless of whether the algorithm is known or not.


> Proving P=NP doesn't magically create the algorithm, it just proves that there is one.

This is 100% true for all practical purposes. But there is an explicit algorithm for NP-complete problems that runs in polynomial time iff P=NP. The Wikipedia page has it written down. https://en.m.wikipedia.org/wiki/P_versus_NP_problem


It is my understanding that P = NP might have serious implications for quantum physics (ie, quantum physics suggests that P shouldn't be equal to NP).


Nah. That "result" circulated a few years ago on the Internet, but it was mostly a mix of hand waving and hidden assumptions. It's not a mainstream result, and I doubt that it will be confirmed someday.

It's interesting because it mixes two topics that are well known in the popular science forums but are very technical, and most people don't want to read all the technical details of both.

Relevant xkcd: https://xkcd.com/1240/


On your second point, a construction which transforms NP-complete problems into P problems in polynomial time necessarily implies P = NP, by construction. If the algorithm isn't correct, then it's a heuristic, not an algorithm, and we have piles of heuristics for working with NP-complete problems already.


I'm pretty far out of my area of expertise, so I'll take your word for it. Could an algorithm be correct, but not proven correct, or is it defined as a heuristic at that point?


Pretty much everything programmers do is create unproven algorithms. Hopefully at least some of them are correct. A heuristic is a problem-solving approach that has a chance of failure. So say you need a pretty fast dictionary, and you can take a relaxed attitude to correctness. You can take a hash of the words of your dictionary, and then as you are filtering some dataset, you can always say in a short time whether a value is definitely not in your dictionary, but your positive solutions have a percent chance of being incorrect (e.g. a hash collision). The idea is generally to make a problem tractable that would otherwise be computationally difficult or impossible. I'm not sure if that quite answers your question.
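That hash-the-dictionary trick is essentially a Bloom filter. Here's a rough Python sketch of the idea with made-up sizes, just to show why the "definitely not here" answers are always right while the "probably here" answers occasionally aren't:

    import hashlib

    class ApproxSet:
        # Bloom-filter-style set: no false negatives, occasional false positives.
        def __init__(self, n_bits=1024, n_hashes=3):
            self.bits = bytearray(n_bits // 8)
            self.n_bits, self.n_hashes = n_bits, n_hashes

        def _positions(self, word):
            for i in range(self.n_hashes):
                h = hashlib.sha256(f"{i}:{word}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.n_bits

        def add(self, word):
            for p in self._positions(word):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, word):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(word))

    d = ApproxSet()
    for w in ("cat", "dog", "fish"):
        d.add(w)
    print(d.might_contain("dog"))    # True, and never wrongly False
    print(d.might_contain("horse"))  # almost certainly False; rare false positives possible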


An algorithm could certainly be correct (in the sense of always giving the correct result) without being proven to do so (or even without any such proof existing, in some particular formal system). You are not wrong about that.


Gödel's incompleteness theorem would seem to imply that this is possible, unless I'm misinterpreting it. I have a rather elementary understanding.


The Collatz conjecture ( https://xkcd.com/710/ ) may well be an example here.


If P=NP, then finding an algorithm is, by a certain definition, exactly as easy as proving an algorithm correct! This is, after all, the gist of what P vs NP means. And this is why it's such a huge deal - the problem is almost self-referential in nature.


> This is, after all, the gist of what P vs NP means. And this is why it's such a huge deal - the problem is almost self-referential in nature.

Could you back that up with some citations? This doesn't ring true. But my pure CS has withered a bit...


For example, an algorithm that just copies its input to its output, unless the input is a proof of the inconsistency of arithmetic, is a correct algorithm to compute the function f(x) = x, yet it's unprovable in PA that it's correct.


This is correct. And, in fact, P=NP.

However, while P=NP, the algorithm (oracle) resides on the other side of the event horizon. This is called the MAD paradox.

https://nerdynotmad.com/p-equals-np/

The submitted proof will be found to be incorrect.


I haven't the foggiest idea what that "proof" is supposed to mean, I can't follow any step in it. What is MAD in this context?


Would it present a problem for cryptography though? Quadratic algorithms are seen as already slow, cubic algorithms as unbearable garbage you best try to avoid if your data size is not tiny. And this completely ignores constant factors, which can play a big role. Homomorphic encryption, for instance, fails to be usable just because of a huge constant factor.


Yes, it would. The adversary in cryptography is assumed to be worst-case evil; it'll do whatever it can to break your algorithm, as long as it's performing a polynomial amount of computation. Remember, you're not just fighting your friendly neighborhood hacker, but possibly entire nation-states.


You can easily have crypto that can't be broken with all the resources and time in the observable universe even if it's "just" polynomial with a sufficiently high coefficient.


Yes, but it becomes significantly more expensive. It might work but it is a huge difference in cost tradeoffs.


Beyond crypto, problems like the traveling salesman would also be solvable in polynomial time.


Well... The traveling salesman decision problem, not the general case.

The problem of finding the optimum path is not in NP: if I give you a candidate solution, you can't easily check whether it's the global optimum.

What is in NP is the decision problem, finding a path that is better than a given bound. If I hand you a candidate solution, you just have to compare the sum of the distances to the bound to check it.


If you can solve the decision TSP in polytime you can solve the optimization case in polytime.


How?

edit: The Wikipedia article mentions that the TSP problem is NP-hard and explicitly says that the decision version of this problem is NP-complete. My assumption is that if the optimization version were proven to be NP-hard, there would be no need to explicitly mention the decision version.

I can think of a way to use the decision version to find a solution to the optimization version but i feel like it must be flawed:

First we do a binary search on `L` (the tour-length bound that is the input to the decision version), so we can find the optimal L within a factor of epsilon (maybe this epsilon is the flaw? But I don't think that is the case). Now we pick an edge and increase its weight to infinity. Now we do the binary search again on the new graph. If the value of the optimal solution has changed, it means that the edge must be in the optimal solution of the optimization version. By doing the same process on all the edges we can find the optimal solution.

As I said there must be a flaw in the above algorithm but I can't find it.


First, use binary search to find the answer. Then remove edges one by one, whenever removing an edge does not destroy all remaining tours of the shortest length, until your graph is reduced to a single cycle.
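Here's a small Python sketch of that reduction, with a brute-force decision oracle standing in for the hypothetical polynomial-time one (so the point is only the shape of the reduction, not its speed): binary search over integer tour lengths to find the optimum, then mark edges as unusable whenever a tour of optimal length still exists without them.

    from itertools import permutations

    def tsp_decision(n, w, bound):
        # Oracle: "is there a tour of total weight <= bound?" (brute force here)
        def length(tour):
            return sum(w[frozenset(e)] for e in zip(tour, tour[1:] + tour[:1]))
        return any(length((0,) + p) <= bound for p in permutations(range(1, n)))

    def tsp_optimize(n, w):
        lo, hi = 0, sum(w.values())                  # integer weights assumed
        while lo < hi:                               # binary search for the optimum value
            mid = (lo + hi) // 2
            if tsp_decision(n, w, mid):
                hi = mid
            else:
                lo = mid + 1
        opt, big = lo, sum(w.values()) + 1
        kept = dict(w)
        for e in list(w):                            # discard edges the optimum doesn't need
            trial = dict(kept)
            trial[e] = big
            if tsp_decision(n, trial, opt):
                kept = trial
        return opt, [e for e in kept if kept[e] != big]  # cheap edges form an optimal tour

    w = {frozenset(e): c for e, c in
         [((0, 1), 1), ((1, 2), 2), ((2, 3), 1), ((0, 3), 2), ((0, 2), 5), ((1, 3), 5)]}
    print(tsp_optimize(4, w))   # optimum 6, via edges 0-1, 1-2, 2-3, 3-0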


Thanks for the answer, I have edited my post and I have basically the same solution. (Only instead of removing the edges I increase their weight to infinity because I think the TSP problem is defined only on complete graphs) But as I have mentioned in my comment I feel like this solution is flawed.


You can without loss of generality assume that the weights are integers (if they are rational, you can multiply all weights so that they become integral).

Hence, you can use bisection to compute the actual optimum, not involving epsilon at all.


Epsilon isn't a problem, as TSP is NP-complete even for integer weights. Your solution needs some modification for the case where we have multiple optimal cycles (as you will find edges that are included in at least one optimal cycle).

I don't think there is anything wrong with optimization being reducible to decision - it's quite a common method both in theory and in practice.


I didn't say there is anything wrong with reducing the optimization to the decision version in general. But I feel this specific application to the TSP problem may be flawed (for the reasons I mentioned before, i.e., Wikipedia being very specific about the decision version). For another resource see: https://www.ibm.com/developerworks/community/blogs/jfp/entry...

note: I am not implying that the above source is reputable. But it does hint that the solution to this problem probably is not this trivial.


The decision version of TSP is NP-complete. The optimization version is Cook-reducible to the decision version of TSP. And if a problem is Cook-reducible to a problem in P, then the problem is itself in P. (Note that this is not true if you replace P with NP, for example.)


Do you have a reference to a reputable source that proves the optimization version is Cook-reducible to the decision version?


I don't. (It's always hard to find a reference for easy problems.)


One minor foible: don't select edges that are part of the minimum path; instead, remove edges that are not part of the minimum path until only the minimum path remains. This strategy works even on graphs with multiple minimum paths.


I don't get why having multiple optimal paths would falsify my algorithm. Can you provide an explicit counter-example?

Here is why I think the algorithm is correct:

At each step the edge that we are considering is either contained in all the optimal solutions or only some of them. If the edge is contained in all the solutions, increasing its weight to infinity would change the optimal solution and we pick that edge in our solution. Otherwise (if the edge is contained in only some of the solutions) increasing the weight would not change the solution because there is another optimal solution that does not contain that edge so we do not pick that edge.

So we can prove this theorem: At every step of the algorithm if an edge is picked, it is contained in all the optimal solutions.

So the algorithm does not pick any extra edges. Now we have to prove that it includes all the necessary edges. But that is easy, because each time we choose not to include an edge, we are sure that there is an optimal solution in the remaining graph, so we are never left with a graph with no optimal solution.

I think you assumed that I meant we change the edge weights from infinity back to their original value at each step but that is not what I meant.


> If the value of the optimal solution has changed it means that the edge must be in the optimal optimization solution.

I interpreted this as meaning you were selecting the falsifying removals' edges, instead of removing the non-falsifying ones. You've got it.


Kind of like proving that you can't solve the halting problem, it lets us put to rest the idea that you can reduce the complexity of NP problems to a deterministic polynomial solution. The collective brainpower can be used to solve other problems.


Intuitively, almost everyone assumes that P != NP but it's been incredibly difficult to prove. If P == NP were proved it would be earth shattering, because lots of difficult problems may become solvable.


> Intuitively, almost everyone assumes that P != NP

Donald Knuth believes P = NP.

Source: http://www.informit.com/articles/article.aspx?p=2213858&WT.m... (question 17). Also cf. https://www.quora.com/Why-does-Donald-Knuth-think-that-P-NP


But keep in mind that he (and just about everyone who believes P = NP) thinks any proof of it will be non-constructive and that we will never actually find a polynomial-time algorithm for NP problems.


Is that really true though? What if someone solves the "does P equal NP" problem by finding a polynomial time algorithm for a problem in NP, but the asymptotic run time is O(n^10^100)?

Sure, the race would be on to improve that, but in the meantime no difficult problems would become solvable.



That was really entertaining, thanks for linking it.

I especially liked this bit:

David Johnson famously once said, "For any instance G = (V, E) that one could fit into the known universe, one would easily prefer |V|^70 to even constant time, if that constant had to be one of Robertson and Seymour's."

I was curious so I tracked down this:

Johnson estimated that the hidden constant is “somewhat larger” than 2 ⇑ (2 ⇑ (2 ⇑ (h/2)) + 3), where 2 ⇑ t denotes an exponential tower of t 2s (2 ⇑ 0 = 1 and 2 ⇑ t = 2^2⇑(t−1)) and h is the number of vertices in H.


It's generally not the case though: problems in P that arise 'naturally', i.e. that have not been constructed to counter my forthcoming point, seem to be of low order.

If NP-complete problems were in P at n^{10^100}, wouldn't we expect a wealth of problems between there and the myriad at n^2 or so?


Why would that be more surprising than the fact that some "natural" problems are polynomial and some are exponential, given that an exponential running time is asymptotically slower than any polynomial running time?


Often, just knowing that something is possible can be a catalyst for big advances.

It's easier to get people to devote resources to a hard but solvable problem than to one which may not even be solvable.


This interpretation is wrong. The class P contains, for example, problems with running time O(N^(10^100000!)), which are not "solvable" by any remotely reasonable meaning of the word.


This is true, but it's difficult to imagine what such a problem could look like. In reality, algorithms tend to come in discrete complexity "units" with very small terms. A linear algorithm isn't just fast -- it tells you something about how such an algorithm works and thus something important about the problem it solves. A quadratic algorithm can be interpreted as a "considers all pairs from the inputs" algorithm. An n*log(n) algorithm does a divide and conquer step for each input. Moving to the exponential world, you have the 2^n "tries all combinations" and n! "tries all orderings" type algorithms, which again, make sense both as mathematical functions as well as behaviors that constitute sensible algorithms.

What does an O(n^10,000) algorithm do? What understandable problem yields a solution that behaves that way?


To follow up on this, in computational linguistics there is a sharp divide between problems which take cubic time, like recognizing whether a string belongs to a context-free language, and problems which are halting-problem-hard, like recognizing whether two context-free grammars describe the same language. There doesn't appear to be much of a middle ground.

In another realm of computational mathematics, matrix multiplication is cubic, with optimizations that can approach quadratic time with lots of effort. It's conjectured that matrix multiplication can actually be brought arbitrarily close to quadratic time, but at the expense of ever-more-complex algorithms.

It could very well be that the biggest interesting exponent in P is 3 or 4.


The AKS algorithm has been reduced to an exponent of 6. That really strikes me as large for polynomial time algorithms though.


The coupled cluster method CCSD(T), considered the "gold standard" of quantum chemistry, is O(N^7). That's the highest-exponent polynomial time algorithm I'm personally aware of that sees regular use. The various coupled-cluster variants are all basically approximations to full configuration interaction, which is a much worse O(N!).


It is! At first, the following may appear quite counter-intuitive, but if you think about larger N and the fact that you often want efficient algorithms also when processing large amounts of data, it becomes clear that an exponent of 2 or 3 is, in practice, often the most we can realistically handle even when the problem is in P, and even though O(n^2) is extremely low from a computational complexity perspective.

For example, when you have an O(n^3) algorithm and want to process 10,000 elements (which is not many in a lot of situations), it will be, as a rough estimate, 1,000,000,000,000 times slower than processing a single element. This will be acceptable only in very specific situations. As a rule of thumb, an exponent of 2 is already unacceptably slow in many practical situations, and at least on the verge of being unacceptable in others.
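
The arithmetic behind that estimate, spelled out (nothing here beyond a back-of-envelope calculation):

    n = 10_000
    for k in (1, 2, 3, 6):
        # rough relative cost of an O(n^k) algorithm at n = 10,000
        print(f"n^{k}: about {n ** k:.0e} steps")
    # n^1: about 1e+04, n^2: about 1e+08, n^3: about 1e+12, n^6: about 1e+24

At a billion simple operations per second, the n^3 case is already on the order of a quarter of an hour, and the n^6 case is tens of millions of years.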


That explains why we don't have widely implemented algorithms of higher-degree polynomial complexity, but it doesn't explain why we haven't thought of many such polynomial algorithms that just aren't practical. After all, we can easily name lots of super-exponential algorithms. They're not directly useful for even moderately sized data, but they arise naturally from looking at certain types of problems, so we almost can't help but find them. The same doesn't seem to be true for polynomial algorithms with high degree.

You're essentially referring to the idea that there could be loads of high-order poly-time algorithms, but we ignore them because those algorithms are slow, and therefore we don't use them. So in the space of all useful algorithms, we simply have a very biased sample.

I think the truth is more profound than that. There actually don't exist very many interesting* algorithms in the classes O(n^k) for k > 3. The real world we live in and model does not feature many interesting problems for which high-order polynomial complexity algorithms are natural solutions.

*Not sure what the right word to use here is...maybe non-trivial? The point I'm going for is to say that obviously we can invent an O(n^5) algorithm by simply nesting our loops five-deep and printing something, but that's a constructed example. I'm looking for algorithms that naturally arise as a solution to some problem.


For example, there is a 0.8776-approximation that runs in about O(n^{10^100}) time: https://arxiv.org/abs/1205.0458v2 . There are also a lot of graph problems, like "distinguish a 3-colorable graph from a graph that can't be colored in 3 colors even after removing an eps fraction of the edges", with large polynomials (IIRC, we got something like O(n^{2^40000}) when we tried to work out the degree explicitly).


While not strictly in the same vein as O(n^10k), the constant factor in the proof that L=SL is 3^(2^65536) at the smallest. A lot of combinatorial problems can generate such algorithms, since they tend to rely on "we can solve this problem if this condition holds, and we can make this condition hold for all inputs if we blow up our input with this large structure [which is technically constant time, since the structure is fixed and does not vary with the input size!]". Consider that this is the field of mathematics that produced a number that is "3 raised to itself so many times that I need to describe an algorithm just to write that number", used as the upper bound for a problem whose lower bound at the time was 6.


Your observation about the correspondence between an algorithm's complexity and its structure seems very interesting to me; thanks for sharing.


Some cryptographic implications of solving the P vs. NP problem, or of proving other results in average-case complexity: https://www.karlin.mff.cuni.cz/~krajicek/ri5svetu.pdf


In practical terms, a resolution of P vs. NP is very likely to have absolutely no impact on our algorithms, even if the answer is that P = NP. The main utility is in our understanding of complexity theory, and a better understanding of why certain problems, and certain instances of problems, have to be hard.

Let me explain why P = NP probably wouldn't have any impact, since you see a lot of bullshit claiming that all sorts of bad things would happen. Descriptions of complexity classes like P and NP sweep a lot of details under the rug, and those details matter a great deal in practice.

The most important of these details is that complexity is based on worst-case running time, not average-case or typical-case time. Often we can solve most instances of "hard" problems. We can factor most integers, for example: half of them are divisible by 2, and another sixth are divisible by 3. If we limit our inputs to products of two primes of roughly equal size, factoring becomes difficult. SAT is another example: it may be the canonical NP-complete problem, but many people think nothing of using a SAT solver (or its cousin, the SMT solver) to answer questions like "which input can reach this program point?". Except, of course, if satisfying that condition requires finding, say, an x with SHA256(x) equal to the binary digits of pi.
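
For a sense of how routine the "easy" SAT instances are, here is a minimal example using an off-the-shelf solver (this assumes the z3-solver Python package; any SAT or SMT solver would serve the same illustrative purpose):

    from z3 import Bools, Not, Or, Solver, sat

    a, b, c = Bools("a b c")
    s = Solver()
    # (a or b) and (not a or c) and (not b or not c)
    s.add(Or(a, b), Or(Not(a), c), Or(Not(b), Not(c)))
    if s.check() == sat:
        print(s.model())  # e.g. [a = True, c = True, b = False]

Instances like this are dispatched instantly; the hardness only shows up when the clauses encode something like inverting a cryptographic hash.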

This is one of the main cruxes of P ?= NP that doesn't come out much: it's not so much that whole problems are hard or easy, it's that there is this core of problem instances that seem to be intrinsically hard. Indeed, if you look at restricted versions of these NP-complete problems, you'll find that some restrictions remain NP-complete, while a very slight change in those restrictions suddenly admits a very simple, fast, easy solution.

The related notion that you sometimes see people bring up and dismiss is that P and NP hide constants. This objection tends to be dismissed because most people have no familiarity with polynomial-time algorithms with massive constants. But such algorithms do exist, and they tend to crop up in combinatorial constructions, which, incidentally, is probably what a polynomial algorithm for an NP-complete problem would look like.

Let me explain by analogy with a not-so-recently-solved problem that's a weaker but related notion to P ?= NP, namely L ?= SL. This question is essentially asking "can you solve every problem that's equivalent to checking connectivity in an undirected graph using only logarithmic memory?" [1]. The answer turns out to be yes. In essence, there is a deterministic string of coin flips deciding your next vertex that, if followed, guarantees you will hit every node in the graph that you can reach within a certain amount of time, provided your graph has certain properties. And it's possible to transform every graph into one with this property by replacing every node with an expander graph that's only of size 3^2^16 at the smallest. But since the new graph has size 3^2^16 * N, that large number is a constant factor that "doesn't matter".
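
A toy version of the random-walk idea behind that result (my own illustration: the real theorem replaces the random coin flips with a fixed, deterministic sequence and works in logarithmic space):

    import random

    def random_walk_covers(adj, start, max_steps=10**6):
        """On a connected undirected graph, a random walk eventually
        visits every vertex; here we simply simulate that process."""
        visited, v = {start}, start
        for _ in range(max_steps):
            if len(visited) == len(adj):
                return True
            v = random.choice(adj[v])
            visited.add(v)
        return False

    cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(random_walk_covers(cycle, 0))  # True, with overwhelming probability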

These kinds of constants show up a lot in combinatorial algorithms. All of these algorithms basically have the property that you can solve the input fairly easily if the input has some structure, but to guarantee that the input has that structure, you need to embed it in a very large instance. I'd argue that, if P = NP were to hold, the algorithm would have a similar form. We know that there are some instances that just seem to be intrinsically harder than others, and we know that there are large classes of instances that can be easily solved with fast algorithms. If we haven't been able to generalize the fast algorithms to cover the hard instances but a "fast" (ignoring constants) algorithm must exist, then the apparent complexity has to be generated by some sort of fiendish, complexity-multiplying combinatorial structure.

We already see in modern cryptography that ciphers and hashes have to choose their parameters carefully to make sure they stay in a hard subset of their input space. So even if P = NP, the hard subset is likely to remain much harder than the easy subset, and that gap is sufficient to maintain security.

[1] A very imprecise characterization.


_The Golden Ticket_ is a non-technical book discussing the implications of solving P vs NP:

http://goldenticket.fortnow.com/


This is a really nice list of the implications; breaking public-key cryptography is at the top of the list: https://softwareengineering.stackexchange.com/questions/1488...

I should say, theoretically breaking public-key cryptography. In practice the problem may remain too hard to brute-force even if the P vs NP question is resolved.


> breaking

This preprint "implies P not equal NP".


I remain optimistic, but some potential holes are starting to show. Scott Aaronson: (http://www.scottaaronson.com/blog/?p=3389)

"To everyone who keeps asking me about the “new” P≠NP proof: I’d again bet $200,000 that the proof won’t stand, except that the last time I tried that, it didn’t achieve its purpose, which was to get people to stop asking me about it. So: the paper claims to give an exponential circuit lower bound for Andreev’s function, but that function as defined in Section 7 of the paper seems clearly to have a polynomial-size circuit, based on polynomial interpolation (thanks to Luca Trevisan for this observation). So I don’t see how this can possibly stand. Please just stop asking, and if the thing hasn’t been fully refuted by the end of the week, you can come back and tell me I was a closed-minded fool."


Say you wanted to let a computer search for a proof of P != NP: which axioms would you start with, and which inference rules would you use to derive additional valid statements from them?


I wonder if the algorithm to search such a space of axioms and transforms for a solution would itself be P or NP. :)


Almost certainly so super-exponential that NP appears easy in comparison.


As far as most mathematicians and computer scientists would go: ZF and predicate logic, together with a fixed definition of Turing machines.


As someone with an undergrad-level knowledge of the problem, can someone please outline the importance of this research? Is this research done in some bounded domain? From what I have read in the comments, there seems to be other papers which have also proved P!=NP. So what does this research add? Just curious, feel free to ignore this...


> From what I have read in the comments, there seems to be other papers which have also proved P!=NP.

None of those papers have been accepted as correct, the problem remains open.

There's a gazillion articles describing the context and importance, and I suggest you do a web search for something like "P vs NP and why it's important".

Here's one: http://www.scottaaronson.com/blog/?p=459

HN discussion: https://news.ycombinator.com/item?id=1605415


Thanks for the reply. That's a great article. I have read a lot of articles like that; in fact I was planning to write one myself. I just wanted to know more about the paper submitted, i.e. what it proposes and the methodology adopted. I tried reading it but couldn't parse it with all the jargon.


That's not the question you asked. If you want to know more about the context of this specific paper then you need to (a) wait for the people in the area to start writing explanations, or (b) ask more specific questions about specific terms.

A good place to start to get a sense of what's going on with this specific paper is this thread, but more particularly, the articles linked from it. One that I think is a great starting point is this one:

https://johncarlosbaez.wordpress.com/2017/08/15/norbert-blum...

But you should perhaps start by asking the question you actually want the answer to - it sounds like you didn't. So take a moment, do some reading, then come back with particular questions. They may already be answered, but before you did the outside reading you didn't realise. That's what happens to me.


So at least according to this, P != NP. I've always been skeptical of this, just because if you find an efficient optimal algorithm, the problem moves between the classes.


>if you find an efficient optimal algorithm the problem moves between the classes

This hasn't ever happened.

You can prove a problem is NP-hard by reducing another NP-hard problem to it. You can prove a problem is in P by exhibiting a polynomial-time algorithm for it.

If you find an efficient (polynomial-time) algorithm for an NP-hard problem, you show P == NP. No one has ever done this.

P != NP is considered by many to be most likely true.


> You can prove a problem is in P by exhibiting a polynomial-time algorithm for it.

Thanks for clarifying - I always thought that's what the above meant, but I didn't realise that the reduction side was also important.


If true, would this be an actual 100% solution, or just an indicator that P != NP? This is all way over my head but the word "approximator" makes me wonder.


>> This implies P not equal NP.


> This implies P not equal NP

I did some graduate level research on P =? NP, specifically in the SAT space <https://en.wikipedia.org/wiki/Satisfiability>.

In particular, I helped design MARMOSET (Marmoset Automated Reasoner Mostly Only Solves Easy Theorems), a competitive SAT problem solver. <http://www.cs.unb.ca/research-groups/argroup/marmoset/> . (It's a cool name... I didn't come up with it :))

The conclusion I drew was:

1. P != NP, because you can convert every SAT problem in polynomial time down to Horn clauses, which are solvable in P (see the unit-propagation sketch after this list), plus non-Horn clauses that cannot be so converted, i.e. that have intractable intrinsic NP complexity whose reduction to the polynomial space would require "clairvoyance" of the quantum-computation variety.

2. Nobody's really interested in a proof that P != NP.
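
For the curious, here is what "Horn clauses are solvable in P" looks like in practice: a minimal forward-chaining (unit propagation) sketch of my own, not anything taken from MARMOSET.

    def horn_sat(clauses):
        """Horn-SAT by forward chaining, polynomial time.
        Each clause is (head, body): body is a set of variables and head is a
        variable or None, encoding the Horn clause  head OR NOT b1 OR NOT b2 ..."""
        true_vars, changed = set(), True
        while changed:
            changed = False
            for head, body in clauses:
                if body <= true_vars:
                    if head is None:
                        return None              # goal clause fired: unsatisfiable
                    if head not in true_vars:
                        true_vars.add(head)
                        changed = True
        return true_vars                         # minimal model; all other vars False

    # x, (x -> y), (x and y -> z)
    print(horn_sat([("x", set()), ("y", {"x"}), ("z", {"x", "y"})]))
    # prints the set {'x', 'y', 'z'} (element order may vary)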

That said, I only spent a couple years at it, and my memory may be faulty and I might change my mind if I revisited the issue. Part of me has always felt that the Horn clause reduction is a first step to isolating problems for a next step, but again — it's been a long time.


>Nobody's really interested in a proof that P != NP.

I doubt that.


I believe what they are saying is that the entire field already assumes P != NP, so if a proof established P != NP, most people would just have their inklings confirmed and that's about it. It wouldn't really cause a shift in research in any of the CS departments. However, if P = NP, you can bet on a renewed interest in finding polynomial-time algorithms.


A proof would likely involve novel techniques and open up new areas of research.


There are a bunch of major assumptions which are not rigorously proven, but generally assumed (hence assumption) to be correct, frequently backed up by a large body of heuristic evidence.

P!=NP is one of them.


That's like saying no one is interested in a proof of the Riemann Hypothesis. It's thought to be true, and mathematics has been derived from its assumed truth, but there is still major interest in proving it.


The difference is that if the Riemann Hypothesis is false, many theorems that depend on it being true will also be invalidated, whereas assuming P != NP is limited to a relatively small area of complexity theory. I think a much closer analogue in complexity theory would be the unique games conjecture, because the unique games conjecture, if true, already completely characterizes certain problems (see: https://en.wikipedia.org/wiki/Unique_games_conjecture#Releva...). Thus, proving it true would close the chapter on many problems, whereas proving P != NP still leaves many gaps open.


> Nobody's really interested in a proof that P != NP.

This is not the case. In my day job I need to worry about what happens if ECDSA is broken. One way that can happen is quantum computation, but that has a relatively transparent development timeline we can plan for. The other way in which ECDSA could be broken is if P == NP and the discrete log problem could thereby be solved in polynomial time. All of our customer funds could be stolen at that point, with liabilities in the hundreds of millions or billions of dollars.

That's a lot of customer money on the line if that happened. My employer might need to pay BIG money for insurance against a P == NP break of ECDSA, or else risk going bankrupt. A proof of P != NP would translate directly into cost savings, either from not needing that insurance or from drastically reduced premiums for it.


Knowing whether P = NP does not imply much in practice unless the answer comes with precise bounds on complexity. N^(N*1e-100) is super-polynomial yet very practical, while N^(1e100) is polynomial yet useless in this Universe.


Knowing P!=NP is immensely practical however.


> I doubt that.

Let me clarify: Nobody appeared to be interested in funding a graduate student to prove P != NP.


Your clarification makes sense - a PhD advisor with a bit of NSF or other grant money would likely only fund a grad student to work on a problem that has a reasonable likelihood of generating some concrete, positive, and publishable results. This typically means an incremental result on a problem of interest to the research community.

That having been said, an accepted proof that P != NP would result in a Turing Award and an additional $1M for solving one of the seven Millennium Prize Problems. This has been an open problem for decades, and it is a problem of enormous importance and visibility.


Although that might be based on the perceived likelihood of success. After all, pre-Wiles, I doubt there was much chance of a grad student getting funding to prove Fermat. Nor is a grad student likely to get funded to prove the Riemann Hypothesis.


Because (I'd guess, as a PhD student in another CS field) any advisor worth their salt would advise a grad student to work on something else first, get tenure, and then maybe approach this problem. Until yesterday, most researchers agreed that the problem was unapproachable. Nowadays being a researcher is a job that requires steady progress, so you must focus on approachable problems. For P vs NP there are tons of results on classes of techniques that _cannot_ work; you'd have to learn those first to make a serious attempt.


I'd challenge you on that second one. There's literally a million-dollar prize out for this one. Lots of people are interested. https://en.wikipedia.org/wiki/Millennium_Prize_Problems#P_ve...


> In particular, I helped design MARMOSET (Marmoset Automated Reasoner Mostly Only Solves Easy Theorems), a competitive SAT problem solver. [...]

Now that is some good SAT solver name - keeping user's expectations low, I guess? Nice choice! ;)


tl;dr Not crackpot, not proven, interesting math. There's a discussion on Google+ started by John Baez, which includes a comment by Gowers: https://plus.google.com/+johncbaez999/posts/hGshTfdimpZ

There are some interesting ideas in the paper, and the researcher has done good work in this area before.


If it's a legit proof, the author should be able to explain why 2SAT is in P while 3SAT is NP-complete. That's my BS test.


Why? Proving something doesn't mean we understand all implications of said proof or how to apply it to a particular situation.

Think about non-constructive proofs.


It's an important special case. You need some sort of filter for all the crazy submissions of potential proofs. I am not the only one who uses this problem as the filter.


> Think about non-constructive proofs.

To me it does not seem that the author of the paper uses weird "caffeinated" non-constructive proof arguments in it.


Whether or not this proof stands, this author, a tenured professor whose professional research is in this area, seems exceedingly likely to know this very standard material.


Regardless of whether the proof is legit, the author should be able to explain that. It's undergrad CS after all.


What's a good explanation of that?


2SAT is boring because, given a clause in which one literal is false, you know the other literal must be satisfied. There's no choice to be made, which limits backtracking enormously. It can even be solved in linear time (although that's nontrivial).
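
For completeness, here is a sketch of the standard linear-time approach via the implication graph (each clause a OR b contributes the implications NOT a -> b and NOT b -> a; the formula is satisfiable iff no variable ends up in the same strongly connected component as its negation). This is illustrative code that checks satisfiability only, without producing an assignment:

    from collections import defaultdict

    def two_sat(n_vars, clauses):
        """2SAT via implication graph + Kosaraju SCC. Literals are +i / -i
        for variable i (1-based); each clause is a pair of literals."""
        graph, rgraph = defaultdict(list), defaultdict(list)
        for a, b in clauses:
            graph[-a].append(b)   # not a -> b
            rgraph[b].append(-a)
            graph[-b].append(a)   # not b -> a
            rgraph[a].append(-b)
        lits = [l for i in range(1, n_vars + 1) for l in (i, -i)]

        order, seen = [], set()
        def dfs1(u):
            seen.add(u)
            for v in graph[u]:
                if v not in seen:
                    dfs1(v)
            order.append(u)
        for l in lits:
            if l not in seen:
                dfs1(l)

        comp = {}
        def dfs2(u, c):
            comp[u] = c
            for v in rgraph[u]:
                if v not in comp:
                    dfs2(v, c)
        for c, l in enumerate(reversed(order)):
            if l not in comp:
                dfs2(l, c)

        # Unsatisfiable iff some variable shares a component with its negation.
        return all(comp[i] != comp[-i] for i in range(1, n_vars + 1))

    print(two_sat(2, [(1, 2), (-1, 2), (-2, 1)]))            # True  (x1 = x2 = True)
    print(two_sat(2, [(1, 2), (-1, 2), (-2, 1), (-1, -2)]))  # False (contradiction)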



There's really nothing there: 11 likes and 1 comment?

Also, this very same Facebook link was linked to at https://cstheory.stackexchange.com/questions/38803/is-norber... by https://cstheory.stackexchange.com/users/24/opt

Neither this HN user, that FB user, nor that Stack Exchange user seems like anyone important, respected, or relevant in the field of CS this article belongs to (correct me if I'm wrong).

Smells like a bad attempt at karma whoring to me.


That 1 comment raises very serious doubts about the paper. The paper is basically assuming that a polynomial-time-solvable problem takes super-polynomial time. If it uses that assumption in an integral manner, then that kills the proof. And even if not, just the fact that the paper makes such a basic mistake raises questions about how solid the rest of it is. Also, the person who made the comment (Luca Trevisan) is a very famous researcher in theory/crypto...


I wonder if crypto values will skyrocket if this is proved


Interesting, I would assume it would tank since there is a way of undermining the system.


No, that would be if P=NP. This is either a proof that P!=NP, or not a valid proof of anything.


1. No Google Scholar account. 2. Three publications on the arXiv.

I am willing to toss this out of hand... but look at Yitang Zhang or Perelman. Who knows? Best of luck.


Although not prolific, the guy has published in top CS theory conferences and journals: http://dblp.uni-trier.de/pers/hd/b/Blum:Norbert

DBLP is the proper place to check a computer scientist's record, not arXiv/Google Scholar.


I am not really sure I agree. At least in my part of CS in the United States, arXiv/Google Scholar are pretty omnipresent. Thank you for pointing me towards a new resource.


Until the singularity P will always be different than NP.


Haven't we seen too many proofs that demonstrate N!=1 but forget to also show that P!=0?


If this is true (and the proof has no flaw), it would be huge and pleasant (many expect it, although there are some who think it's a problem that might be unprovable in our current logic framework). So let's see how it works out.

However the author missed that there is a special case for N == 1, where actually P == NP (sorry, I could not resist).


Don't forget the case where P == 0


You're both wrong.

P is the union of all TIME complexity classes defined by n^k for every natural k.

NP is the union of all NTIME complexity classes defined by n^k for every natural k.
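
In symbols, the standard textbook definitions (LaTeX, stated here only for reference):

    \mathrm{P}  = \bigcup_{k \in \mathbb{N}} \mathrm{TIME}\!\left(n^{k}\right),
    \qquad
    \mathrm{NP} = \bigcup_{k \in \mathbb{N}} \mathrm{NTIME}\!\left(n^{k}\right)

Nothing in either definition refers to a particular input size or value, which is why the N == 1 and P == 0 jokes above don't type-check.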


If P != NP, then comp sci will lose much of its appeal. There is an underlying hope behind the field that P = NP; otherwise most problems of interest are intractable, and programmers are no longer masters of the universe.

This is probably why there is not a proof yet, since the truth is undesirable.


I'm not sure what you mean by "appeal." But, it seems to me that if P = NP, and if we can find a constructive proof of this fact, i.e. someone presents a P-time algorithm A_L deciding an NP-complete language L, the field of CS in a sense would get much less interesting, because although we would have answered arguably the most important question ever posed, there would be much less of a need to research and develop efficient algorithms or approximation strategies. We'd probably just focus on trying to improve the runtime of A_L and the reductions from other NP languages.


Yes every field is of interest to its specialists. But the popular appeal is that comp sci promises the ultimate explanation of a materialistic reality. At least this was its appeal when I chose it for my major, and it drives the religious transhumanism and the AI hype and arguably the funding of IT. But if the public learns seemingly trivial problems are inherently intractable or impossible for computers, that glamorous spectacle will shatter.


> If p != np, then comp sci will lose much of its appeal.

Say what? A lot of work is based on the assumption that P!=NP so you have to be clever in other ways.


All that work is boring to the non-specialist. It's the promise of unlimited power that gives IT its glamor.


That's like saying Math will lose its appeal because there is no algorithm to find prime numbers.

P != NP is natural, you just gotta roll with it.


There is an algorithm to find prime numbers. It might not be efficient but it exists.
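
For example, plain trial division: correct, easy to state, and far from efficient on large inputs (a textbook sketch):

    def is_prime(n):
        """Trial division: correct, but exponential in the bit length of n."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print([n for n in range(2, 30) if is_prime(n)])
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]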


Uh... P ISN'T known to equal NP, but people seem to have a lot of fun using computers anyway.

Arguably, P=NP makes CS less fun, since an efficient "universal algorithm" could solve a vast class of problems in polynomial time, while P != NP means there's more room for programmer creativity making intractable problems tractable in special cases.


No, all of cryptography is based on the assumption that P != NP


  This is probably why there is not a proof yet, since the truth is undesirable.
You are presenting an explanation for a false observation.



