Maybe I'm stupid, but does he mean that as long as the task at hand is solved, it doesn't matter how we categorize it? In the submarine case it would be "move through water", for example. Or is it deeper than that?
To be fair, 10+ years ago this conversation definitely would have been pretty silly. Maybe about as interesting as asking "is there other life in the universe".
No one knows the answer, it's an incredibly over discussed topic, and we won't know for sure for many years.
I think those points still apply to AI intelligence today. However, the power of today's AI greatly outstrips anything Dijkstra would have seen in his day.
The point isn't about whether it is unknowable or not; rather, does having the answer have any practical value? That is, does the attribution of 'thinking' add any value to understanding a program?
The improvement of AI lately doesn't invalidate his point, though. I'm sure submarine technology has similarly improved, but it's still irrelevant whether or not a submarine can be said to 'swim'.
Is there something here about the terms being sloppy and unscientific, making the answer somewhat useless? Whatever "swimming" or "thinking" might be, it's not something clearly defined.