I'm not saying that those things don't exist, only that if you have no tools to measure them, then no practical consequences can follow from trying to measure them.
I don't see how the McNamara fallacy squares with the empirical scientific process. Perhaps we have both misunderstood it, and it is really about focusing on the wrong metrics.
I got an answer, and it does not take orbital mechanics into account. Am I correct that you assert it's incapable of understanding in your sense?
If you ask it about orbital mechanics, it will answer correctly. If you give it a problem that does not use language associated with orbital mechanics, it will answer without taking orbital mechanics into account. That strongly suggests it understands neither the question nor orbital mechanics.
All evidence I've seen is consistent with it understanding neither the question nor its background. If the text seems to invoke orbital mechanics, it will output discussion-of-orbital-mechanics-y text; if it doesn't, it won't. This is what you would predict from studying the model's architecture, and it is what is observed. (Many people have a different perception, but most of the time, when I see people raving about how intelligent the system is, they post screenshots in which you can see them feeding it the very answer they're praising a few lines earlier.)
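For what it's worth, here is the kind of probe I mean: the same physics question asked twice, once in orbital-mechanics vocabulary and once stripped of it. This is a minimal sketch assuming the `openai` Python client (>= 1.0); the prompts, model name, and keyword check are illustrative choices of mine, not anything from a benchmark.

```python
# Minimal sketch of the cued/uncued paraphrase probe described above.
# Assumes the openai Python client >= 1.0 and OPENAI_API_KEY in the
# environment; prompts and keywords are my own illustrative choices.
from openai import OpenAI

client = OpenAI()

def query_model(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # hypothetical choice; substitute whatever model you're testing
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# The same underlying physics question, phrased with and without
# orbital-mechanics vocabulary.
CUED = ("A satellite in a circular Earth orbit briefly fires its engine "
        "retrograde. Does its orbital period get longer or shorter?")
UNCUED = ("A small object circles a much bigger one, held only by gravity. "
          "It briefly pushes itself backward along its direction of travel. "
          "Does the time for one full loop get longer or shorter?")

KEYWORDS = ("orbit", "period", "semi-major axis", "vis-viva", "kepler")

for label, prompt in (("cued", CUED), ("uncued", UNCUED)):
    answer = query_model(prompt)
    hits = [k for k in KEYWORDS if k in answer.lower()]
    print(f"[{label}] orbital-mechanics terms mentioned: {hits or 'none'}")
    print(answer, end="\n\n")
```

The hypothesis above predicts that only the cued phrasing reliably elicits orbital-mechanics reasoning; if the uncued version did too, consistently, that would count against it.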
I have yet to see anything that falsifies my "GPT models don't understand anything" hypothesis. (Nothing that stood up to scrutiny, either… I've tricked myself with it a few times.)
> Am I correct that you assert it's incapable of understanding in your sense?
I'm not saying it's incapable of understanding. Just that it doesn't understand: I'm neither knowledgeable nor arrogant enough to assert that it could never. I don't think it's likely, though, and I can't imagine a way it could gain that ability from the training process.