
If a model has a reasonable understanding of two concepts that make up a larger system, and that system contains little beyond those concepts, then it should be able to come up with the system on its own, even though it has never seen it and the composition was never explained to it prior to that logical process.

The illusion happens when the alleged reasoning about how such a system comes to be is clearly based on prior knowledge of the system as a whole, meaning its construction/source was already in the training data.
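
To make that concrete, here is a rough sketch of how one might automate such a test. Everything here is hypothetical: ask_model stands in for whatever model API you actually call, and the keyword check is a crude stand-in for properly grading whether the composition was derived rather than recalled.

    from typing import Callable

    def composition_test(ask_model: Callable[[str], str],
                         concept_a: str,
                         concept_b: str,
                         expected_terms: list[str]) -> bool:
        """Prompt with only the two concepts; pass if the answer
        uses the key terms of the composite system."""
        prompt = (
            "You know two things:\n"
            f"1. {concept_a}\n"
            f"2. {concept_b}\n"
            "Using only these, design a system that combines them "
            "and describe how it works."
        )
        answer = ask_model(prompt).lower()
        return all(term.lower() in answer for term in expected_terms)

    # Example: can the model reinvent a Bloom-filter-like structure from
    # "hash function" + "bit array" without the name ever being mentioned?
    if __name__ == "__main__":
        fake_model = lambda p: ("Hash each item several times and set those "
                                "bit positions; membership means all bits set.")
        print(composition_test(
            fake_model,
            "A hash function maps any input to a fixed-size pseudorandom number.",
            "A bit array stores one bit per position with set/test operations.",
            expected_terms=["hash", "bit", "set"],
        ))

The hard part, of course, is verifying that the composite system (and near-paraphrases of it) really was absent from the training data.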



That sounds like a good litmus test. Do you have a specific example you've tried?

My opinion is that it isn't binary; rather, it's a scale. Your example is a point on that scale, higher than where models are now.

But perhaps that's too liberal a definition of "reasoning", no idea.

We seem to move the goalposts on what constitutes human-level intelligence as we discover the various capabilities exhibited in the animal kingdom. I wonder if it is, or will be, the same with AI.




