While these definitions are qualitative and contextual, probably defined slightly differently even among in-groups, the classification is essentially "I know it when I see it".
We are not dealing with an evaluation of intelligence, but rather a classification problem. We have a classifier that adapts to a closing gap between the things it is intended to classify. Tests often get updated to match the evolving problem they are testing; nothing new here.
>the classification is essentially "I know it when I see it".
I already see it when it comes to the latest version of chatGPT. It seems intelligent to me. Does this mean it is? It also seems conscious ("I am a large language model"). Does that mean it is?
The question is not whether you consider a thing intelligent, but rather whether you can tell meatbag intelligence and electrified sand intelligence apart.
You seem to get the Turing test backwards. The Turing test does not classify entities into intelligent and non-intelligent; rather, it takes a preexisting ontological classification of natural and artificial intelligence and tries to correctly label each.
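To make that framing concrete, here is a minimal Python sketch (the transcripts and the judge heuristic are made up for illustration, not anyone's actual methodology): the test starts from ground-truth labels (human vs. machine) and measures whether an interrogator can recover them; a judge stuck at chance accuracy means the gap has effectively closed.

```python
# Minimal sketch: the Turing test as binary label recovery, not an intelligence score.
# The transcripts and the judge heuristic below are hypothetical placeholders.

# Each transcript already carries a ground-truth label: "human" or "machine".
transcripts = [
    ("Hi! I spent the weekend hiking, how about you?", "human"),
    ("As a large language model, I do not have weekends.", "machine"),
    ("Honestly I just binged a show and ate leftovers.", "human"),
    ("Certainly! Here is a concise summary of my weekend.", "machine"),
]

def judge(text: str) -> str:
    """Stand-in for the human interrogator: guesses a label for one transcript."""
    giveaways = ("large language model", "certainly!", "as an ai")
    return "machine" if any(g in text.lower() for g in giveaways) else "human"

# The test measures how well the judge recovers the preexisting labels,
# not how "intelligent" either party is.
correct = sum(judge(text) == label for text, label in transcripts)
print(f"Judge accuracy: {correct / len(transcripts):.0%}")
# Accuracy near 50% = labels indistinguishable, i.e. the machine "passes".
```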