There's a difference between P(Q(Q)) not being able to finish its evaluation and P(Q(Q)) stating that Q(Q) will never finish. If P does not finish, then you have no answer. If you have no answer, then P does not do what we defined it to do.

But I think you're missing the more important point. We defined P to be a certain way, and as a result of that definition, we also demonstrated that it cannot do what we defined it to do. That's the fundamental problem. If you want to think of it in terms of what you're currently hung up on, then consider that implicit in our definition of P is that it will finish.

Also consider that if P does not finish, it's no better than just running the program with its input and observing the behavior.




No, P(Q(Q)) will finish, precisely because P(Q) will not: Q(Q) calls P(Q), and since P(Q) never returns, Q(Q) never halts - which is exactly what P(Q(Q)) can then report.

As to the definition, I suppose, though it's largely implicit, which is effectively a big no-no in proofs. But that doesn't show that a real-world P must finish. If it doesn't finish, that failure can itself serve as the paradox detector: just detect whether the halt-detection halts.


We're doing proof sketches, not formal proofs. In a formal proof, it would be explicit. A proof sketch shows the big concepts behind the proof; the formal proof grinds through the details to make sure those concepts do, in fact, pan out as we think.

P(Q) will finish because we defined it to do so. It will return either true or false. The problem isn't with the termination (or non-termination) of P(Q), it's with Q(Q). The termination status of Q(Q) is what gives us the paradox: its behavior is the opposite of what P(Q) said it would be. If we can construct a case where P behaves contrary to its definition, then it cannot exist.
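
For concreteness, here's the standard construction, sketched in Python. (A sketch only: P is the assumed black box, and it can't actually be implemented, which is the whole point. I'm reading P(f) as answering "does f(f) halt?", matching the P(Q)/Q(Q) notation above.)

    # Assume, for contradiction, a halt-detector P that always returns:
    #   P(f) returns True  if f(f) halts
    #   P(f) returns False if f(f) does not halt
    # Build Q on top of it:
    def Q(f):
        if P(f):             # the oracle says f(f) halts...
            while True:      # ...so do the opposite: loop forever
                pass
        # the oracle says f(f) never halts: do the opposite and halt

    # Now consider Q(Q). If P(Q) returned True, Q(Q) loops forever;
    # if P(Q) returned False, Q(Q) halts. Either way P(Q) was wrong
    # about Q(Q), contradicting P's definition.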

In other words, we asked our oracle "Will Q finish?" and our oracle gave us a reply. Then Q went off and did the exact opposite, proving the oracle wrong. But our oracle is, by definition, always right, so we have a Serious Problem. We resolve this Serious Problem by recognizing that we asked our oracle a question that it can never know the answer to.

Remember that Q(Q) is what gives us the paradox in the first place. P(Q(Q)) is nonsense. It's like recognizing that 4/0 is undefined, then trying to define cos(4/0).

I think your discussion about a "real-world P" begs the question. (In the original meaning of the phrase.) The existence of P presents us with a paradox, so how can there be a real-world P?


Because you can re-define the problem by assuming P finishes if and only if it does not encounter a paradox. It's not hard to see it as possible, as we're already assuming P exists and finishes: allowing it more flexibility should only make it more likely to exist.

By allowing it to not finish in paradoxical situations, it can exist in what would normally be a paradox. Thus P(Q(Q)) can indeed exist, because P(Q) does - even though P(Q) does not return.

There may still be a way to prove such a paradox-avoiding P cannot exist; I'm not claiming that. Just wondering why we require it to return, when as far as I can tell there's no legitimate, real-world reason to do so. Sure, it makes the proof easier, and allows us to use simple logical grammars... but since when has "simple" always been correct?

The equivalent would be the oracle remaining silent, denying the paradox the possibility of existing. If no statement is made, no paradox can be formed.
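
To pin down what I'm proposing, here's a sketch (hypothetical, obviously - nobody has exhibited such a P; this only spells out the relaxed contract):

    # A "paradox-avoiding" halt-detector P would satisfy:
    #   P(f) returns True  if f(f) halts
    #   P(f) returns False if f(f) does not halt
    #   P(f) may instead loop forever (the oracle stays silent)
    # Q is built on P exactly as in the standard proof:
    def Q(f):
        if P(f):             # never completes if P stays silent
            while True:
                pass

    # Then Q(Q) calls P(Q); if P(Q) never returns, Q(Q) never halts.
    # "False" would have been the true answer, but since P asserted
    # nothing, no contradiction is ever produced.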


P does not encounter a paradox. P is the paradox. We encounter the paradox because we observe that P is incorrect. You can't dance around a paradox at runtime. A paradox means either one of your assumptions is invalid, or your logic is wrong. If it exists, there's no getting around it.

When constructing proofs, we can make assumptions and then run with them. But it's not valid to assume magic - and that's essentially how you want to define P. You want P to be able to detect that it has been used in a paradoxical situation. But it's not reasonable to assume a function knows how it's being used; to do so is basically magic.

We require that P returns because we defined it as a function in the mathematical sense, and mathematical functions have to provide an answer (even if that answer is "undefined"). P is a function that has to return true or false. No other responses, including not giving us an answer, are valid.


We encounter the paradox of P because of the definition.

It doesn't need to detect its own use (though remember: reflection), it just needs to see when a separate use of it will cause a paradoxical situation. The proof wouldn't exist if P(Q) weren't part of Q, because that's what sets up the paradox: its own return cannot be correct... unless it has another output, maybe?

And when I see a proof that a program cannot detect paradoxes, I'll accept that a modified P cannot either. But I've never heard of anything along those lines. Until then, a mis-applied proof simply stifles attempts that might otherwise be fruitful.


Honestly, I think the difficulty you're having is one of form, not content. That is, you understand the mechanics behind the proof, but you don't accept that the form of the proof works. Your objections hinge on the implications of a paradox, and our seemingly arbitrary construction of Q.

Instead of banging your head against this proof more, I think you may want to read up on proofs in general. Your objections would apply to any proof by contradiction.

Example: consider that Q exists even if we never define it - we discovered it, not invented it. Given any P, we can define a corresponding Q', and the problem Q posed is still there. But this really has nothing to do with this specific proof - it's more fundamental than that.


> We encounter the paradox of P because of the definition.

The point of the problem is this: to create a program P that will tell us whether f(x) halts or not. Q is simply a function that breaks P so that it cannot give the right answer, thus making P impossible.

Paradoxes are irrelevant to the issue :)

(You're overthinking this a lot; read one of the formal proofs and it will address pretty much all of your points.)


> Because you can re-define the problem by assuming P finishes if and only if it does not encounter a paradox. It's not hard to see it as possible, as we're already assuming P exists and finishes: allowing it more flexibility should only make it more likely to exist.

And now P is a non-solution to the original problem (the halting problem). The original problem was to come up with a program that always decides whether its input program halts. We can already build programs that identify the programs which halt and say nothing about programs that don't -- this is a trivial extension to any working interpreter for the language in question.
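
A minimal sketch of that trivial extension, in Python (the names are illustrative; "program" is any callable):

    # A semi-decider for halting: answers True if program(arg) halts,
    # and never answers at all otherwise.
    def halts_semidecider(program, arg):
        program(arg)     # run it; this call may never return
        return True      # reached only if program(arg) halted

    halts_semidecider(lambda n: n + 1, 41)   # returns True
    # halts_semidecider(loops_forever, 0)    # would never return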



