What does this mean for computer science (and other areas)? I'm familiar with the basic idea, but I really don't know much more than that, or how this would affect what I do.
We have lived with P ≠ NP as the prevailing opinion for nearly three decades. If P ≠ NP, there will be no dramatic engineering consequences from the result, as the focus shifted long ago toward heuristics and approximations. If P = NP, we may still not know an explicit algorithm for solving TSP efficiently, but there will be hope of finding one, and the search will continue.
But it may be more realistic to consider what happens if P ≠ NP. The main change for CS will be a shift in the focus of research.
If the proof is correct, there will be a brief flurry in which people find simplifications, restatements, and shallow applications to other areas of research.
But for three decades this has been the motivating problem in theoretical computer science. I think there will be a lot of soul searching, and perhaps a subdued atmosphere in the research community, but then attention will turn back to the topics people are actively exploring. However, there will perhaps never again be such a unifying, fundamental problem in CS.
If P were equal to NP and we found a polynomial-time algorithm for just one NP problem, then through polynomial-time reductions we should be able to solve all, or at least most, NP problems in polynomial time. This would be far superior to heuristics, even with the most naive of implementations, for most relevant NP problems.
This is incorrect. You would need to find a polynomial algorithm for at least one "NP-hard" problem, not any problem in NP. And a polynomial, exact algorithm would not necessarily be superior to heuristics for relevant real-world NP problems such as 3SAT... what if the polynomial were O(N^1000)?
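To make the "easy to check, maybe hard to find" distinction concrete, here is a toy sketch (the formula and encoding are made up for illustration): verifying a candidate 3SAT assignment takes time polynomial in the formula size, while the obvious search tries all 2^n assignments.

```python
from itertools import product

# A toy 3SAT formula as a list of clauses; each literal is (variable, is_negated).
# Encodes: (x0 or x1 or not x2) and (not x0 or x2 or x1)
formula = [
    [(0, False), (1, False), (2, True)],
    [(0, True), (2, False), (1, False)],
]

def verify(formula, assignment):
    """Check a candidate assignment -- polynomial in the formula size."""
    return all(
        any(assignment[var] != neg for var, neg in clause)
        for clause in formula
    )

def brute_force(formula, num_vars):
    """Search all 2^n assignments -- exponential in the worst case."""
    for bits in product([False, True], repeat=num_vars):
        if verify(formula, bits):
            return bits
    return None

solution = brute_force(formula, 3)
print(solution)
print(verify(formula, solution))  # True: checking a given solution is cheap
```

P = NP would say that whenever `verify` runs in polynomial time, some polynomial-time procedure can replace `brute_force`; the open question is whether that is possible.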
You are right, I confused NP with NP-hard. However, I think it's safe to assume that if a polynomial solution exists for an NP-hard problem, its complexity wouldn't be so incredibly high that it's better to use heuristics.
> I think it's safe to assume that if a polynomial solution exists for an NP-hard problem, its complexity wouldn't be so incredibly high that it's better to use heuristics.
How is this possibly safe to assume? The "heuristic" methods aim for a "good enough" solution rather than "the best solution". Why would a polynomial-time algorithm for exactly solving an NP-hard problem be faster than polynomial-time "heuristic" algorithms (which, depending on the acceptable solution quality, can be computationally quite cheap)?
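The heuristic-versus-exact trade-off is easy to see on a small TSP instance (the instance below is randomly generated for illustration): a greedy nearest-neighbor heuristic runs in O(n^2) and is usually within a modest factor of optimal, while the exact search examines (n-1)! tours.

```python
from itertools import permutations
import math
import random

random.seed(1)
# A small random instance: points in the unit square.
points = [(random.random(), random.random()) for _ in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor():
    """Greedy heuristic: O(n^2), no optimality guarantee."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def exact():
    """Brute force: (n-1)! tours -- already painful around n = 12."""
    best = min(permutations(range(1, len(points))),
               key=lambda rest: tour_length((0,) + rest))
    return (0,) + best

greedy = tour_length(nearest_neighbor())
optimal = tour_length(exact())
print(greedy, optimal, greedy / optimal)
```

Even a hypothetical exact polynomial algorithm with a huge exponent would lose to the greedy pass here for any practical input size, which is the point being made above.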
I felt exactly the same way. I personally think (which is worth next to nothing of course) that if it's ever proven that P=NP (unlikely), the results will be mostly impractical.
It could mean that certain problems which are known to be "difficult" (cryptography, factoring products of large primes, traveling salesman, bin packing, etc.) may not be inherently "difficult" and may have solutions (yet to be found) which can solve the problem in polynomial time.
There are many things built on the assumption that certain types of problems are not easily solvable (cryptography being the most obvious), so if it could be proved that algorithms making these problems easily solvable exist, it would shatter the line of thinking about these types of problems.
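As a toy illustration of the cryptography point (the numbers here are deliberately tiny and purely illustrative): RSA's security rests on factoring the public modulus being hard, so anyone who can factor efficiently can rebuild the private key.

```python
# Toy RSA with a deliberately tiny modulus.
p, q = 61, 53
n = p * q                # public modulus, 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)

# An attacker who can factor n rebuilds d and decrypts:
def factor(n):
    for f in range(2, int(n ** 0.5) + 1):  # trial division: fine at toy size,
        if n % f == 0:                     # hopeless for 2048-bit moduli
            return f, n // f

fp, fq = factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(cipher, d_recovered, n))  # 42 -- the recovered plaintext
```

Note that factoring is in NP (a factor is an easily checked certificate), so a constructive proof of P = NP would threaten every scheme of this shape.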
The way I like to think about NP is: are there certain types of problems which we, as smart as we might be, cannot easily solve, not because we aren't smart enough but because of the nature of the problem itself?
(I have no formal education in this area so this is just my layman's attempt at an answer)
Then, on a tangential note: are there classes of problems harder than NP, that we could switch to for cryptography et al. if it turned out that P = NP?
There are classes of problems harder than NP. However, the problem of cracking any useful encryption algorithm is always going to be in NP: legitimate decryption with the key must run in polynomial time, so the key itself is a certificate that can be checked quickly, which is exactly what membership in NP requires.
Is that correct? I'm not sure. NP problems are verifiable in deterministic polynomial time. That includes the hardest of NP problems (NP-complete problems, e.g. travelling salesman and graph coloring). Are there problems that are "harder" than that? I never learned about anything outside NP—I know there are other complexity classes out there (like probabilistic classes), but I don't know how they compare to NP. My guess is that anything outside NP is essentially a non-deterministic problem altogether, meaning the "answer" isn't formally a simple "yes" or "no."
> The way I like to think about NP is: are there certain types of problems which we, as smart as we might be, cannot easily solve, not because we aren't smart enough but because of the nature of the problem itself?
That's not entirely accurate. There are many problems that are intractable or even outright unsolvable (i.e. we can't solve them because of the nature of the problem) but have nothing to do with P = NP. Consider the halting problem, for instance: no algorithm can decide it at all, no matter how much time is allowed.
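The halting-problem argument can be sketched in a few lines (the `claimed_oracle` below is a hypothetical stand-in, since the whole point is that no correct one can exist): given any claimed halting oracle, we can build a program that does the opposite of whatever the oracle predicts about it.

```python
def make_trouble(halts):
    """Given any claimed halting oracle, build a program it gets wrong."""
    def trouble():
        if halts(trouble):
            while True:      # oracle said "halts" -> loop forever
                pass
        # oracle said "loops forever" -> halt immediately
    return trouble

def claimed_oracle(program):
    # Hypothetical stand-in that always answers "halts"; the diagonal
    # argument shows every candidate oracle fails on some such input.
    return True

trouble = make_trouble(claimed_oracle)
print(claimed_oracle(trouble))  # True -- yet trouble() would loop forever,
                                # so the oracle is wrong on this very input
```

This impossibility holds regardless of how P and NP relate: it is about computability, not running time.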
> There are many problems that are intractable (i.e. can't solve them because of the nature of the problem) but have nothing to do with P = NP
Can you elaborate some more on this? I'd like to learn more about this area. Are you referring to problems for which no polynomial time solution exists, and there is also no polynomial time method for verifying a solution?
> The way I like to think about NP is: are there certain types of problems which we, as smart as we might be, cannot easily solve, not because we aren't smart enough but because of the nature of the problem itself?
If you read it one way, it sounds as if P=NP would allow us to solve every formal problem. It doesn't. One very interesting example:
The concept of Kolmogorov complexity. I'm not a CS theorist, so I can only give my layman's interpretation, but if I understand it correctly it goes like this: given a particular string of bits, the Kolmogorov complexity is the length of the shortest possible program that can produce this string. In effect, that means: the ultimate compression. Well, the theory of KC says that you can't find this shortest possible program for every string. Not because it takes too much time to check every possible combination (P =? NP) but because of the nature of the problem: it's uncomputable.
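One thing we *can* do, though, is bound Kolmogorov complexity from above with any ordinary compressor: the compressed form plus a fixed decompressor is a description of the string. A small sketch (the strings below are just illustrative examples):

```python
import zlib
import random

random.seed(0)
repetitive = b"ab" * 500                              # 1000 highly structured bytes
random_str = bytes(random.randrange(256) for _ in range(1000))  # 1000 noisy bytes

# zlib gives an *upper bound* on K(x): a short compressed form proves the
# string has a short description. No program can compute the true K(x)
# for every string -- that minimum is uncomputable.
print(len(zlib.compress(repetitive)))   # tiny: lots of structure to exploit
print(len(zlib.compress(random_str)))   # about 1000 or more: no structure
```

The gap between the two outputs is the compressor detecting structure; the impossibility result says no algorithm can always find the *best* possible description.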
I don't think this is necessarily true. The gap between tractable and intractable is a human convention (albeit a reasonable one). It's quite possible that polynomial-time algorithms exist that technically defeat cryptography, but they can still be practically intractable if their constant factors are astronomical.
Of course, that's still a blow to cryptography, since polynomial increases in computing power will make cryptography-breaking algorithms run meaningfully faster. Still, I don't think it would immediately devastate cryptography.
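The "astronomical constant factors" point is easy to make quantitative. As a sketch, take a hypothetical O(n^100) attack and ask when it actually beats 2^n:

```python
import math

# Smallest n where the "polynomial" n^100 finally beats 2^n:
n = 2
while n ** 100 >= 2 ** n:
    n += 1
print(n)  # crossover near n = 1000

# ...but at that size both step counts are beyond absurd: 2^n alone has
# hundreds of decimal digits, dwarfing the ~10^80 atoms in the
# observable universe.
print(math.log10(2) * n > 80)  # True
```

So an algorithm can be "polynomial time" in the formal sense while being useless against any real key size, which is why P = NP would not instantly kill cryptography in practice.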
I think what will be really fascinating is that if statistical mechanics does end up being the mechanism that finishes the puzzle, CS will have to take a much harder look at making physics a core part of the curriculum.
This would mean that the two largest results in CS in the past 20 years have both come from working with a foot in both CS and physics (Shor's work on factorization with quantum computers being the other huge result, IMO).
I'd love to see undergrad CS, at least the theory tracks, require a concentration in quantum and statistical physics.
This kinda covers it, but it would also mean that BPP = NP. This is because P = NP implies BPP = P: BPP sits inside the polynomial hierarchy (Sipser–Gács–Lautemann: BPP ⊆ Σ2 ∩ Π2), and P = NP collapses the whole hierarchy down to P, so BPP = P = NP.
Since BPP (http://en.wikipedia.org/wiki/BPP) essentially encompasses problems that can be solved with very high probability in polynomial time (the definition only requires success probability bounded above 50%, but you can repeat the algorithm polynomially many times and take a majority vote to push the probability arbitrarily close to 1), it is expected that BPP = P. Not to mention that the huge number of derandomization results is further evidence that this is very likely true.
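The amplification step mentioned above can be computed exactly. A sketch (the 2/3 success probability is just the conventional BPP constant): run the algorithm t times independently and take the majority answer.

```python
from math import comb
from fractions import Fraction

def majority_correct(trials, p=Fraction(2, 3)):
    """Exact probability that a majority vote over independent runs,
    each correct with probability p, returns the right answer."""
    need = trials // 2 + 1
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(need, trials + 1))

for t in (1, 11, 101):
    print(t, float(majority_correct(t)))  # error shrinks exponentially in t
```

By a Chernoff bound the error decays exponentially in the number of repetitions, which is why "better than a coin flip" is all the definition needs.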
P = NP would touch almost every field known to man: maths (e.g. it would be only a matter of search to take down the remaining Clay Problems, since finding a bounded-length proof is itself an NP problem), philosophy, engineering, biology, physics.
The tie to physics is particularly interesting: for example, if quantum mechanics were a nonlinear theory, then we could physically solve NP-complete problems [1]. Note the direction of the implication: this is not to say P = NP implies QM is nonlinear. The utopian consequences of P = NP are discussed very engagingly by Scott Aaronson here [2].
I suggest you check out his blog as well; his writing style is clear and fun. Plus there was a recent discussion there that linked complexity theory to foundational physics.
Even if P = NP, the search for *efficient* polynomial-time algorithms will still be key. For example, determining whether a given number n is prime was long known to be in NP (and co-NP) but not known to be in P, until Manindra Agrawal, Neeraj Kayal, and Nitin Saxena showed that "PRIMES is in P" (the AKS algorithm). Its running time is polynomial in the number of digits, i.e. O(polylog(n)), yet it is not widely used in practice: it is still "slow" compared to randomized tests.
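What practitioners actually use instead of AKS is Miller–Rabin, a randomized (BPP-style) test. A compact sketch: with the fixed witness set below it is known to be exact for all n below roughly 3.3 × 10^24, which covers 64-bit integers with room to spare.

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin primality test. With these fixed bases it is exact for
    n < ~3.3e24; with random bases it is the classic randomized version."""
    if n < 2:
        return False
    for p in bases:                    # handle small factors / small primes
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                  # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False               # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))    # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))    # False: divisible by 3
```

This is exactly the earlier point about heuristics and randomness: a provably polynomial algorithm (AKS) exists, yet a faster randomized test wins in practice.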
Essentially covers cryptography that isn't based on hard number-theoretic problems like factoring. Though it's not the same area, similar solutions/approaches might survive a P = NP discovery plus an algorithm.