Many problems boil down to making a yes/no choice for each of a large set of options. An example might be planning a diet: there is a list of acceptable foods (each with a particular calorie value) and a maximum calorie recommendation for each day, and the problem is choosing which foods to eat on a particular day. A characteristic of these types of problems is that candidate answers are relatively "easy" to check. It's easy to determine whether a particular set of food choices adds up to under or over the daily maximum. If you were to tag each food option with a "deliciousness" value, you could also total up the deliciousness of a day's choices and compare one menu against another.
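To make the "easy to check" point concrete, here is a minimal sketch in Python (the food data and function name are invented for illustration): verifying a proposed menu takes a single pass over the chosen foods, no matter how cleverly or luckily they were picked.

```python
def check_menu(choices, calorie_cap):
    """Verify a proposed day's menu.

    Each choice is a (calories, deliciousness) pair. Checking is cheap:
    one pass over the chosen foods, regardless of how hard it was to
    find a good menu in the first place.
    """
    total_calories = sum(cal for cal, _ in choices)
    total_delicious = sum(d for _, d in choices)
    return total_calories <= calorie_cap, total_delicious

# Hypothetical data for illustration.
menu = [(350, 7), (500, 9), (120, 4)]
print(check_menu(menu, 1500))  # (True, 20)
```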
Some of these problems are such that the total number of possible solutions grows extraordinarily fast relative to the "size" of the problem (e.g. the number of possible food choices). So fast that for relatively "small" problem sizes (perhaps only 100 or so food choices) it becomes physically impossible to check every single candidate solution, even with a computer that ran as fast as the laws of physics allow and was the size of the entire known universe. Moreover, for some of these problems nobody knows any way to find an exact solution that is meaningfully faster than a brute-force check of all, or a large fraction of, the candidates.
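A brute-force sketch shows where the blowup comes from (again with invented names): the loop below touches every subset of the foods, and the number of subsets doubles with each food added.

```python
from itertools import combinations

def best_menu_brute_force(foods, calorie_cap):
    """Try every subset of foods: 2**len(foods) candidates in the worst case."""
    best = (0, ())
    for r in range(len(foods) + 1):
        for subset in combinations(foods, r):
            calories = sum(cal for cal, _ in subset)
            delicious = sum(d for _, d in subset)
            if calories <= calorie_cap and delicious > best[0]:
                best = (delicious, subset)
    return best

# With n foods there are 2**n subsets. At n = 100 that is already
# 1267650600228229401496703205376 candidates, far beyond any real machine.
print(2 ** 100)
```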
Now, this doesn't mean that (depending on the problem) you can't generate quite good approximations to the optimal solution by using shortcuts; it just means that exact solutions are impractical.
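For instance, a shortcut for the diet problem might look like the following sketch (a standard greedy knapsack heuristic, not anything specific to this text): it runs quickly and often lands near the optimum, but it carries no guarantee of finding it.

```python
def greedy_menu(foods, calorie_cap):
    """Heuristic: repeatedly take the food with the best
    deliciousness-per-calorie ratio that still fits the budget.
    Fast (one sort plus one pass), but not guaranteed optimal.
    Assumes every food has a positive calorie count.
    """
    chosen, calories_left = [], calorie_cap
    for cal, d in sorted(foods, key=lambda f: f[1] / f[0], reverse=True):
        if cal <= calories_left:
            chosen.append((cal, d))
            calories_left -= cal
    return chosen
```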
As it turns out, many of these super-hard problems are closely related to each other: you can translate one variety of problem (such as visiting several cities in the least total distance, or scheduling a large number of meetings among a large number of people with a fixed number of meeting rooms) into another variety. That means that if you had a shortcut to "easily" (i.e. practically) compute a solution to one of them, you could solve all of the others as well, merely by couching those other problems in the frame of the problem you had "solved".
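To show what "translating one problem into another" actually means, here is a sketch of one classic textbook reduction, from subset sum to partition; the function names are invented, but the construction itself is standard.

```python
def subset_sum_to_partition(numbers, target):
    """Turn a subset-sum question ("does some subset of `numbers`
    add up to `target`?") into a partition question ("can this new
    list be split into two halves with equal sums?").
    Standard trick: append two padding elements so the halves balance
    exactly when, and only when, the original target is hittable.
    Assumes nonnegative numbers and 0 <= target <= sum(numbers).
    """
    s = sum(numbers)
    return numbers + [s + target, 2 * s - target]

def has_partition(numbers):
    """Brute-force partition checker, just to demonstrate the translation."""
    total = sum(numbers)
    if total % 2:
        return False
    reachable = {0}
    for x in numbers:
        reachable |= {r + x for r in reachable}
    return total // 2 in reachable

# Does some subset of [3, 1, 5, 9] sum to 8? Ask it as a partition question.
print(has_partition(subset_sum_to_partition([3, 1, 5, 9], 8)))  # True (3 + 5 = 8)
```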
This is the essence of P vs. NP. P is the class of problems where the cost of computing an exact solution doesn't grow too fast relative to the input size, so real-world computers can handle them. NP is the class of problems that would be just as tractable if you happened to have a magical computer that could evaluate and compare any number of candidate solutions simultaneously; equivalently, they are the problems whose proposed answers can be checked quickly, like the diet menus above. Naturally, NP includes all of the P problems. The NP-complete problems are the hardest problems in NP: every NP problem can be translated into them, so as far as anyone knows, only the magical computer would make them tractable.
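The "checked quickly" side of NP can be made concrete with a sketch like this, a verifier for Boolean satisfiability, the canonical NP-complete problem (the encoding here is my own choice for illustration):

```python
def verify_sat(clauses, assignment):
    """Check a proposed solution to a Boolean formula in CNF.

    Each clause is a list of literals: a positive int i means variable i,
    and -i means NOT variable i. `assignment` maps variable -> bool.
    Verification is one linear scan, even though *finding* a satisfying
    assignment may, as far as anyone knows, require exponential search.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))  # True
```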
The question is whether there is actually a difference between P and NP. As mentioned, all NP-complete problems are essentially equivalent: if you found a way to turn one of them into a P problem, you could turn every NP-complete problem into a P problem. However, these problems are so difficult to grapple with that we don't know whether this is possible or impossible. The prevailing belief is that it is impossible, and that there is no way to find exact solutions to NP-complete problems except by trying many or all of the candidates (making exact solutions impractical or effectively impossible, depending on the problem size).
If P were equal to NP, the consequences would be far-reaching, some good and some bad. It might become possible to compute exactly optimal network routing, for example. But it might also become possible to crack public-key cryptography.
Proving that P is not equal to NP would have consequences as well. It would mean that for some problems (e.g. routing and scheduling) we could never hope for anything better than approximate solutions, and that public-key cryptography might be generally reliable (given a large enough key size).