
Au contraire, the whole history of AI is one of moving goal posts. One professor I worked with quipped that a field is called AI only so long as it remains unsolved.

Logic arguments and geometric analogies were once considered the epitome of human thinking. They were the first to fall. Computer vision, expert systems, complex robotic systems, and automated planning and scheduling were all considered AI-hard problems at some point. Even Turing thought that chess was a domain which required human intellect to master, until Deep Blue. Then it was assumed Go would be different. Even in the realm of chat bots, Eliza successfully passed the Turing test when it was first released. Most people who interacted with it could not believe that there was a simple algorithm underlying its behavior.




> One professor I worked with quipped that a field is called AI only so long as it remains unsolved.

Not just one professor you worked with; this has been a common observation across the field for decades.

But the deeper debate about this is absolutely not about moving goal posts; it is about research revealing that our intuitions were (and thus likely still are) wrong. People thought that very conscious, high-cognition tasks like playing chess likely represented the high-water mark of "intelligence". They turned out to be wrong. Ditto for other similar tasks.

There have been people in the AI field, for as long as I've been reading pop-sci articles and books about it, who have cautioned against these sorts of beliefs, but they've generally been ignored in favor of "<new approach> will get us to AGI!". It didn't happen for "expert systems", it didn't happen for the first round of neural nets, it didn't happen for the game-playing systems, it didn't happen for the schedulers and route creators.

The critical thing that has been absent from all the high-achieving approaches to AI (or some subset of it) thus far is that the systems do not have a generalized capacity for learning (both cognitive learning and proprioceptive learning). We've been able to build systems that are extremely good at a task; we have failed (thus far) at building systems which start out with limited abilities and grow (exponentially, if you want to compare it with humans and other animals) from there. Some left-field AI folks would also say that the lack of embodiment hampers progress towards AGI, because actual human/animal intelligence is almost always situated in a physical context, and that for humans in particular, we manipulate that context ahead of time to alter the cognitive demands we will face.

Also, most people do not accept that Eliza passed the Turing test. The program was a good model of a Rogerian psychotherapist, but could not engage in generalized conversation (without sounding like a relentlessly monofocal Rogerian psychotherapist, to a degree that was obviously non-human). The program did "fool" people into feeling that they were talking to a person, but in a highly constrained context, which violates the premise of the Turing test.

Anyway, as is clear, I don't think that we've moved the goal posts. It's just that some hyperactive boys (and they've nearly all been boys) got over-excited about computer systems capable of doing frontal lobe tasks and forgot about the overall goal (which might be OK, had they not made such outlandish claims).



