
You are mixing up knowledge and reasoning skills. And I've definitely met high schoolers who were smarter than PhD student colleagues, so even there your point falls apart. When you lump together all forms of intelligence without any clear definition, you'll never get any meaningful answers. For example, is your friend not intelligent because he's not a world-elite chess player? Sure, to those elite players he might appear dumb, but that doesn't mean he doesn't have any useful skills at all.

That's also what Turing realised back then. You can't test for such an ambiguous thing as "intelligence" per se, but you can test for practical real-life applications of it. Turing was also convinced that all the arguments against computers being "intelligent" (many of which you see repeated over and over on HN) were fundamentally flawed. He thought that the idea that machines couldn't think like humans said more about the flaws in our understanding of our own minds than about any technological problem.

Without any meaningful definition of true intelligence, we may have to live with the fact that the answer to the question "Is this thing intelligent?" must come from the outcome of practical tests like Turing's, not from dogmatic beliefs about how humans might have solved the test differently.



I choose to disagree, mostly semantically.

While these definitions are qualitative and contextual, and probably defined slightly differently even within in-groups, the classification is essentially "I know it when I see it".

We are not dealing with an evaluation of intelligence, but rather a classification problem. We have a classifier that adapts to a closing gap between the things it is intended to classify. Tests often get updated to match the evolving problem they are testing; nothing new here.


>the classification is essentially "I know it when I see it".

I already see it when it comes to the latest version of ChatGPT. It seems intelligent to me. Does this mean it is? It also seems conscious ("I am a large language model"). Does that mean it is?


The question is not whether you consider a thing intelligent, but rather whether you can tell meatbag intelligence and electrified sand intelligence apart.

You seem to have the Turing test backwards. The Turing test does not classify entities into intelligent and non-intelligent; rather, it takes a preexisting ontological classification of natural and artificial intelligence and tries to label each correctly.


This is not a question of semantics. If anything, it's a question of a human superiority complex. That's what Turing was hinting at.


Can you list some sources or quotes? I'm not familiar with the parts you're referencing; it seems like you're putting a lot of words in his mouth.


I think you’re overthinking things here.

Tests need to grow with the problem they’re trying to test.

This is as true for software engineering as it is for any other domain.

It doesn’t mean the goal posts are moving. It just means the thing you want to test has outgrown your original tests.

This is why you don’t ask PhD students to sit the 11+.



