> Note that Mario cannot win if he falls from the top ledge (since he must always remain large, he can’t fit through a one-tile-high entryway).
I'm pretty sure this is inaccurate with respect to the actual mechanics of Super Mario Bros, although the technique needed might not work with the particular layout shown. As I recall, it's possible to build a bit of momentum and crouch into a one-block gap, then turn around and stand to have the collision push Mario through the gap. A similar technique combined with another quirk of the engine (it's possible to very briefly land on the seam between two blocks in a vertical wall) is used for the Minus World glitch.
You are correct, and it doesn't even require a one-block gap. Simply clipping the corner of a block can allow a player to pass through as well, which is how you access world -1. That particular glitch can be exploited either at the very top of the 1-2 pipe, or at the open edge of that same pipe that Mario would normally enter. Every video one can find about world -1 exhibits this glitch.
The article is only technically correct, since those are glitches and certainly not intended; in actual practice, yeah, it's not totally accurate.
The games themselves are fixed size. They can't be NP-hard, since NP-hardness is a property of difficulty with respect to input size.
They show that you can generalise them and build levels that are NP-hard: for example, in the case of Pokemon, they build systems requiring that the player have a specific team (one that is very unlikely to ever actually be used).
Here is an actual attempt at implementing a general AI to play all NES games. It's pretty interesting, but his approach does involve the algorithm having access to the raw NES memory map:
As per usual, the comments here have a loose understanding of NP-hardness. Anyone discussing limited space to store the levels simply doesn't understand what NP-hard means. A problem can still be NP-hard even if your input size is small.
If the input size is bounded, then the problem H can't be NP-hard unless P = NP.
Proof:
A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time reduction R from L to H.
By assumption, H has finitely many instances, so precalculate a binary lookup table T of its answers. Accessing that table takes constant time, which is trivially polynomial in L's instance size.
The algorithm "T . R" is then a polynomial-time algorithm for H, so P = NP.
It's irrelevant how long the lookup table took to create: P = NP if such an algorithm exists, regardless of how difficult it is for humans to find the algorithm.
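For concreteness, here's a toy sketch of that argument in Python; the table contents and the reduction are made up, since the point is only the shape of the composed algorithm:

```python
# Toy illustration: H has finitely many instances, so its answers fit
# in a precomputed dict T. How hard T was to build is irrelevant.
T = {"yes-instance": True, "no-instance": False}

def R(x):
    # Stand-in for the polynomial-time reduction from L to H
    # (toy L here: "is every bit of x set?").
    return "yes-instance" if all(x) else "no-instance"

def decide_L(x):
    # Polynomial-time reduction followed by an O(1) table lookup:
    # together, a polynomial-time decider for L.
    return T[R(x)]

print(decide_L([1, 1, 1]))  # True
print(decide_L([1, 0, 1]))  # False
```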
I don't understand this. AFAIK a problem is NP-hard or NP-complete (or in P) in _general_, that is, over all instances, not for a specific instance. Wouldn't these games count as just one specific instance of the problem?
The papers actually say "The rulesets allow the creation of levels which can encode an NP Hard problem". That just means that if you know the answer, it's fast to verify it, but finding the answer appears to take a lot of time.
Usually this boils down to creating AND/OR gates out of level tiles and combining them to form arbitrary computational problems, most often 3-SAT, because it's straightforward to encode in logic gates.
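For a flavor of what encoding gates as clauses looks like, here's a small sketch (my own toy representation, not the papers' actual construction) of the standard trick for expressing c = AND(a, b) as clauses:

```python
from itertools import product

# c <-> (a AND b) as clauses; each literal is (variable, polarity).
# Two-literal clauses would be padded with a dummy literal in strict 3-SAT.
clauses = [
    [("a", False), ("b", False), ("c", True)],  # not-a or not-b or c
    [("a", True), ("c", False)],                # a or not-c
    [("b", True), ("c", False)],                # b or not-c
]

def satisfies(assignment, clauses):
    # True iff every clause contains at least one true literal.
    return all(any(assignment[v] == p for v, p in c) for c in clauses)

# The clauses hold exactly when c equals (a and b):
for a, b, c in product([False, True], repeat=3):
    assert satisfies({"a": a, "b": b, "c": c}, clauses) == (c == (a and b))
print("AND gate encoded correctly")
```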
It depends on whether you regard the game as limited to the pre-defined levels or allow new levels to be constructed using the same rules. The constructions discussed in the article are all about encoding NP-Complete problems in new level layouts, such that the player has to make a series of irrevocable decisions that need to encode a solution to the NP problem in order to solve the level.
NP-hardness is not about every instance being hard; it's a worst-case notion. The paper provides an infinite family of instances which are hard assuming P is not NP. Any poly-time algorithm solving all instances has to solve those hard instances too, hence the game is hard in general.
I wonder if this actually proves that Super Mario Bros. is NP-hard.
The thing is, breaking bricks with Koopa shells was actually introduced in Super Mario Bros. 3. Before that, bricks could only be broken with big Mario.
DeepMind has produced a few agents that play classic games well.
When the authors of this paper say that these games are NP-hard, they're taking some "poetic license": complexity classes are defined for problems that have instances of increasing sizes (since complexity is defined as a function of instance size), and Super Mario Bros. is only one instance.
So they analyze generalizations of the game, and as the level size grows to infinity the generalized game becomes very hard to play.
For the real games, DL does pretty well (and even simpler agents do well).
One very small sample: I had a student try training a neural network (in TensorFlow) to solve NP-hard problems, and we couldn't get it to do any better than random guessing. I wasn't really surprised, because you need to make many interlocked decisions, all in one go, and all correctly. Also, it's (at least for me as a human) almost useless to try to learn from other similar-looking problems to spot patterns in answers.
Can we define human-level intelligence, which is the stated end goal of AI, as the ability to find more or less approximate solutions to NP-hard problems, as well as undecidable problems (we solve the halting problem every day as part of our job) in polynomial time?
I'll treat this like an interview question, and anyone can correct me if I'm wrong: my understanding is that it means it's impossible to predict the best solution without brute forcing all possible combinations of actions... am I close there?
Close but no cigar. It means that you can verify a solution in polynomial time. Remember that P is contained in NP, i.e. some NP problems can also be solved in deterministic polynomial time.
It is widely believed that some problems in NP are not also in P, i.e. that the solutions cannot be found in polynomial time. Note that brute force may not be the most efficient way to solve such problems. For example, integer factorization is believed to not be in P, but the best known solution is much better than brute force.
NP-hard problems are problems that can be used to solve any other NP problem with only polynomial-time overhead. For instance, you can "encode" any NP problem as a 3COLOR problem, and then solve the 3COLOR problem instead. The reason NP-hardness is interesting is that if you could solve an NP-hard problem in polynomial time, then all of NP could be solved in polynomial time, so P=NP.
Not all NP-hard problems are actually in NP. In other words, there are some problems that can be used to solve any NP problem, but whose solutions cannot be verified in deterministic polynomial time. The problems in NP that are also NP-hard are called NP-complete problems; 3COLOR is an example of such a problem. If you can prove that a given NP-complete problem cannot be solved in deterministic polynomial time, that proves P!=NP.
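To make the "verification is easy" half concrete, here's a minimal 3COLOR verifier sketch in Python; the example graph and colorings are made up:

```python
def is_valid_3coloring(edges, coloring):
    # One pass over vertices and edges: linear time, hence polynomial,
    # which is exactly what puts 3COLOR in NP.
    return (all(c in (0, 1, 2) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))

# Made-up example: a 4-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_valid_3coloring(edges, {0: 0, 1: 1, 2: 0, 3: 1}))  # True
print(is_valid_3coloring(edges, {0: 0, 1: 0, 2: 1, 3: 2}))  # False: 0 and 1 clash
```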
In the worst case, it is commonly thought (this is P vs NP) that you will have to test a super-polynomial number of solutions. It doesn't have to be all of them.
In addition to the sibling correction: NP means that a given answer can be verified in polynomial time (polynomial time means an algorithm running in O(n^k) steps for input size n and some constant k, as opposed to e.g. O(k^n)). An example would be travelling salesman (“is there a Hamiltonian path shorter than L in the given graph?”), which you can easily verify given a path: does it visit every vertex, and is it shorter than L?
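A sketch of that verifier, with a made-up adjacency-dict representation (vertex -> {neighbor: weight}); everything here is illustrative:

```python
def verify_witness(graph, path, limit):
    # Does the path visit every vertex exactly once?
    if sorted(path) != sorted(graph):
        return False
    # Is every step an actual edge, and is the total length under the limit?
    total = 0
    for u, v in zip(path, path[1:]):
        if v not in graph[u]:
            return False
        total += graph[u][v]
    return total < limit

graph = {"a": {"b": 2, "c": 9},
         "b": {"a": 2, "c": 3},
         "c": {"a": 9, "b": 3}}
print(verify_witness(graph, ["a", "b", "c"], 6))  # True: 2 + 3 < 6
print(verify_witness(graph, ["a", "c", "b"], 6))  # False: 9 + 3 >= 6
```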
Strictly speaking, NP isn’t so much about the finding of solutions, just the verification of their correctness. Finding them efficiently is what P captures [edit: and P is contained in NP].
P=NP would mean that if we can easily verify a solution, then we can also find it easily in the first place. P!=NP means that there are some problems which are easy to verify the solution for, but hard to find a solution for.
> Strictly speaking, NP isn’t so much about the finding of solutions, just the verification of their correctness.
Because nitpicking is fun, I'll add my contribution and say that the traditional definition of NP is actually about the finding of solutions, and is the etymology of the phrase "nondeterministic polynomial time". That is, a nondeterministic machine (one that can magically search multiple states at once) can find a solution in polynomial time.
Of course, since our nondeterministic algorithm could just be to enumerate all witnesses and verify them, the two definitions are equivalent, except that the verification-based one is much more intuitive.
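A sketch of that equivalence (all names mine): deterministically simulating the nondeterministic guess by enumerating every candidate witness and running the fast verifier on each.

```python
from itertools import product

def decide_by_enumeration(instance, verify, witness_len):
    # Exponentially many candidate witnesses, but each individual
    # verification is polynomial-time; this is the brute-force
    # simulation of the nondeterministic machine.
    for bits in product([False, True], repeat=witness_len):
        if verify(instance, bits):
            return True
    return False

# Made-up example: subset sum, where the witness selects a subset.
nums, target = [3, 5, 8], 11
verify = lambda inst, bits: sum(n for n, b in zip(inst[0], bits) if b) == inst[1]
print(decide_by_enumeration((nums, target), verify, len(nums)))  # True: 3 + 8
```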
I wouldn't count that as close for numerous reasons. If this were an interview question, I would expect discussion of NP-complete vs NP-hard at the very least.
> it's impossible to predict the best solution without brute forcing all possible combinations of actions
That's not true; not all possible combinations need to be checked. Pruning techniques such as branch and bound can frequently eliminate the majority of possibilities without actually checking them.
"Impossible" is also a very strong word.
Furthermore, there are plenty of problems that we currently can't solve without brute force ("impossible" to do so), but we don't know if they're NP-complete or even in NP at all.
However, the really important detail you're missing, and which makes your answer completely wrong, is the verification of a proposed solution.
If a problem is in NP, then it must be possible to verify that an answer is correct in polynomial time.
Your answer simply stated that a solution could be found by brute-forcing; it didn't mention that an answer is easy to verify once found, nor that the brute-force search itself may take super-polynomial time.
By missing those two details, your answer is so far off the mark I'd recommend passing in an interview based on that alone.
> ...your answer is so far off the mark I'd recommend passing in an interview based on that alone.
What kind of work do you do that requires regular detailed analysis of the complexity class of algorithms?
It's been my experience that it's enough for an engineer to be able to say, "This algorithm will (not) run in an acceptable time for our expected input sizes," and that the precise definitions of NP complete and NP hard are best forgotten and looked up on Wikipedia when needed.
A problem is in NP if a Turing machine, given an instance of the problem, can verify a proposed solution (assuming that the size of the solution is polynomial in the size of the instance) in a number of steps polynomial in the size of the instance.
An example would be 3-SAT: the verifier takes a Boolean formula of a particular form (a conjunction of clauses, each a disjunction of 3 literals) plus an assignment of the variables, and checks whether the assignment satisfies the formula. You can do that check in polynomial time, so 3-SAT is in NP.
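That check is short in practice. A minimal sketch, assuming my own clause representation where each literal is a (variable, polarity) pair:

```python
def check_3sat(clauses, assignment):
    # One pass over the clauses, so the running time is linear in the
    # size of the formula: a polynomial-time verifier, as NP requires.
    return all(any(assignment[v] == polarity for v, polarity in clause)
               for clause in clauses)

# (x1 or not-x2 or x3) and (not-x1 or x2 or x3)
clauses = [[(1, True), (2, False), (3, True)],
           [(1, False), (2, True), (3, True)]]
print(check_3sat(clauses, {1: True, 2: True, 3: False}))   # True
print(check_3sat(clauses, {1: False, 2: True, 3: False}))  # False
```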
A good read for learning fun and practical examples of how to define problems for a solver. I never learned terribly advanced math because it bores me, but papers like these are more accessible and help me get into the mathematician's mindset.