P doesn't detect the paradox. We do. We have intelligence; algorithms follow a prescribed set of rules. No one is sure what, exactly, "intelligence" is, but we can say it's not the same as just an algorithm. We're also not guaranteed to be correct.
Remember that P is a program to detect whether *any* program halts. It's entirely possible to write a program that detects whether *some* programs halt. You just have to constrain what the programs can do, to the point that the language they're written in is no longer Turing-complete. We could also use machine-learning techniques to try to recognize the structure of infinite-loop programs, but that's not the same as an algorithm that is always correct.
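To make the "constrain the language" point concrete, here's a minimal sketch (my own toy construction, not anything from the thread): a hypothetical mini-language whose only control flow is a *forward* jump. The instruction pointer strictly increases on every step, so every valid program halts, and a halting checker for this language is trivially decidable.

```python
# Toy restricted language: programs are lists of (op, arg) pairs.
# Allowed ops: "inc" (no-op placeholder here) and "jmp" to a FORWARD
# target only. Forward-only jumps mean the instruction pointer strictly
# increases, so every valid program terminates.

def is_valid(program):
    """True iff every jump target is strictly ahead of the jump."""
    for i, (op, arg) in enumerate(program):
        if op == "jmp" and arg <= i:
            return False          # backward jump: outside the language
    return True

def halts(program):
    """Decidable halting check for the restricted language."""
    if not is_valid(program):
        raise ValueError("program is outside the restricted language")
    return True                   # forward-only jumps always terminate

def run(program):
    """Interpreter: returns the number of steps executed."""
    ip, steps = 0, 0
    while ip < len(program):
        op, arg = program[ip]
        ip = arg if op == "jmp" else ip + 1
        steps += 1
    return steps

prog = [("inc", None), ("jmp", 3), ("inc", None), ("inc", None)]
print(halts(prog))   # True
print(run(prog))     # 3 steps: the jump skips instruction 2
```

This is the same trick real systems use: total languages (e.g. non-Turing-complete configuration languages, or proof assistants with termination checkers) give up expressive power in exchange for a guaranteed answer to "does this halt?"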
Stating that requires the assumption that we are qualitatively different from a program. I believe that's been an open question since long before Turing's time.
What if we are equivalent to a program? Then a program can be made to detect paradoxes.
I'm comfortable stating that we are qualitatively different from an algorithm. I'm also comfortable stating that we're probably similar to a Bayesian machine-learning process, one that can only make probabilistic determinations.
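By "probabilistic determination" I mean something like the following sketch (toy numbers of my own choosing, purely illustrative): rather than *deciding* whether a program halts, a Bayesian process updates a degree of belief given observed features of the program.

```python
# Hedged illustration of a "probabilistic determination": a single
# Bayes-rule update of the belief that a program halts, given one
# observed feature. The probabilities are invented for illustration.

def bayes_update(prior, p_feature_given_h, p_feature_given_not_h):
    """Posterior P(halts | feature) via Bayes' rule."""
    num = p_feature_given_h * prior
    den = num + p_feature_given_not_h * (1 - prior)
    return num / den

# Hypothetical feature: "contains a loop whose counter is never
# modified" -- assumed rare in halting programs, common otherwise.
posterior = bayes_update(prior=0.5,
                         p_feature_given_h=0.05,
                         p_feature_given_not_h=0.60)
print(round(posterior, 3))   # 0.077 -- probably doesn't halt
```

The point is that the output is a belief, not a verdict: unlike the hypothetical P, this process can be (and sometimes will be) wrong.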
Same here; I just felt it needed to be pointed out. There are plenty of people who do make that claim, and for them, everyday experience appears to contradict the proof.