Oof... what's the term for being sunk by the very thing that you're warning against?
I was using a very hand-wavy version of the word "truth" in my reply, trying to frame it in terms of how people talk about capital-T Truth in informal philosophy rather than, say, model satisfiability in model theory (which completely side-steps this question by assuming its resolution).
What I meant was to express a hypothetical anti-Platonist position. One version of mathematical Platonism is a belief that only some subset of "plausible" (i.e. those expressed by a consistent set of axioms) mathematical entities are real and it is the job of logician-philosophers to feel out just what those are. Of course they must appeal to extra-mathematical and extra-logical principles to do so, but that is after all why Platonism is a philosophy rather than a branch of mathematics. This is a simplified version of the philosophy that underlies some efforts to find "the one true set theory" that extends ZFC.
A hypothetical brand of anti-Platonism could argue that every consistent set of axioms is equally real or unreal. There is no reason to choose one over another. The only thing with any objective reality to it is the mapping of those axioms to the real world, and that mapping you can do either correctly or incorrectly. Hence if we're talking about capital-T Truth (i.e. the reality of the world), it is captured in that mapping rather than in the axioms themselves. Therefore there's no point, e.g., in trying to divine "the one true set theory." Just use whatever you find useful.
Regardless, my overall point is that these are all matters of philosophy that cloud the particulars of what is going on with Godel's incompleteness theorems, often by exaggerating their consequences.
Godel's theorems have philosophical ramifications, but then again so do basically all theorems given the right framing.
It is nonetheless true that Godel's theorems are a particular focal point of discussion in the philosophy of mathematics (though, again, one with surprisingly few ramifications for mathematics as a whole), and a very fruitful one at that. I'm not dismissing the idea of having a philosophical discussion around them, only cautioning that it requires a deep understanding of all the seemingly paradoxical statements they present, which defy attempts to concisely state their philosophical ramifications.
My point at the start of all this is that I disagree with the pedagogical choice, made by a lot of introductory posts on the incompleteness theorems, to mix in that philosophical content at the beginning. You can always do that later, after first understanding the many seemingly paradoxical statements that arise from Godel's theorems (such as the ones I offered concerning Conway's Game of Life and the consistent system that proclaims the inconsistency of its own subpart).
However, when the philosophical parts are presented up front, they tend to be what people latch onto, rather than the inner workings of the theorems (which are extremely important to understand if subsequent philosophical conclusions are to be coherent), and people latch onto the everyday, woolly meanings of the words "prove" and "truth" rather than their more precise counterparts in the theorems.
The exaggeration in the article I'm referring to is this:
> For example, it [Godel's incompleteness theorem] may mean that we can’t write an algorithm that can think like a dog...but perhaps we don’t need to.
To address this specific claim: Godel's incompleteness theorems have little to say about it. I think the assumed chain of logic is "algorithms rely on a computable listing of axioms" -> "any such listing must have truths that are unprovable in it" (the Godel step) -> "dogs perceive truth" -> "there are certain things dogs perceive that cannot ever be derived by an algorithm."
But leaving aside the plausibility of the non-Godelian steps in that chain, it relies on a subtle simplification of what "truth" means, one that severely complicates the chain. Again, I would highly recommend thinking over the Conway's Game of Life and "inconsistent consistent theory" problems. Those two problems demonstrate that "any such listing must have truths that are unprovable in it" is often an over-simplification (okay, so I have a system S, and the sentence "S is inconsistent" is consistent with S if S is consistent; what is true here?).
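To make that parenthetical puzzle precise, here is the standard consequence of Godel's second incompleteness theorem it rests on (assuming S is consistent, computably axiomatized, and strong enough to formalize arithmetic; Con(S) is the usual arithmetized consistency sentence):

```latex
\[
  S \nvdash \mathrm{Con}(S)
  \quad\Longrightarrow\quad
  S' := S + \neg\mathrm{Con}(S) \ \text{is consistent,}
\]
\[
  \text{yet}\quad S' \vdash \neg\mathrm{Con}(S)
  \quad\text{while}\quad \mathbb{N} \models \mathrm{Con}(S).
\]
```

So S' is a consistent theory that proves the inconsistency of its own subtheory S, even though that claim is false in the standard model; which of those counts as the "truth" the chain appeals to is exactly the question.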
This is very similar (and indeed the similarity runs deeper due to connections between computability and logic) to analogous arguments with the halting problem/Rice's Theorem.
"Rice's theorem says that we cannot prove any abstract property about any program" -> "Static analysis attempts to prove abstract properties about programs" -> "Static analysis is impossible."
Or similar arguments that use the halting problem to argue that Strong AI is impossible.
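The halting-problem version of the argument can be sketched in a few lines of Python (the names `make_contrarian` and `check` are mine, purely for illustration): given any claimed total halting oracle, you can build a program that does the opposite of whatever the oracle predicts, so no such oracle exists. Crucially, this rules out one specific total function, not static analysis or machine reasoning wholesale.

```python
def make_contrarian(halts):
    """Given a claimed total halting oracle halts(f) -> bool
    (True iff calling f() eventually halts), build a program
    the oracle must misjudge."""
    def contrarian():
        if halts(contrarian):
            # Oracle says we halt, so loop forever.
            while True:
                pass
        # Oracle says we loop, so halt immediately.
        return "halted"
    return contrarian

def check(oracle_answer):
    """Simulate an oracle that gives the fixed answer
    `oracle_answer` on every input, and report what
    contrarian actually does."""
    halts = lambda f: oracle_answer
    contrarian = make_contrarian(halts)
    if oracle_answer:
        # Running contrarian here would loop forever; we can
        # read that off its code without executing it.
        return "actually loops"
    # Safe to run: the oracle answered False, so contrarian halts.
    contrarian()
    return "actually halts"

# Whatever the oracle answers, contrarian does the opposite:
print(check(True))   # oracle said "halts" -> actually loops
print(check(False))  # oracle said "loops" -> actually halts
```

The diagonalization here is the same trick as in Godel's proof, which is why the two families of exaggerated arguments fail in the same way: the theorem defeats one particular total procedure, not every partial or approximate one.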
More broadly, though, this sort of "anti-mechanistic" viewpoint appears all over the place in discussions of Godel's theorems, to the point that the Stanford Encyclopedia of Philosophy has a whole section dedicated to it that plainly states "These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail." https://plato.stanford.edu/entries/goedel-incompleteness/#Gd...
Having seen some of those arguments I am very inclined to agree with the encyclopedia here.
Thank you for your clarity. This discussion chain may have kicked off an existential crisis. I think you might have exposed me to a Lovecraftian horror. The article and your discussion have been, if not mind-altering, very much mind-expanding.
I can't say I've followed all of the nuance in your (argument?) comments. I do know I'm going to be chewing on this for a while. I've come to grips with the horror of saccades; fascinating to learn about new ways my mind, uh, decides what is true.
In any case, I appreciate your long, thoughtful responses. It may not mean much to you, but to at least one reader it means a lot.