I don't agree that higher-level symbolic reasoning is necessarily harder, nor that the lower level is simply engineering. The low-level problems are still very tough, and require scientific approaches, not just hacks or engineering.

As for the "single domain" issue, this was a big red herring in old-style AI (before the so-called AI winter) -- basically, it turns out that the different branches of what used to be called AI (e.g., vision, machine learning, NLP, speech, etc.) have to use quite different techniques.

"General" AI systems such as logic, game players (and yes, I know it's not just about games), etc., can be used in various domains, but in general they're not very applicable to most real-world problems in the different domains.

For example, I work in computer vision, and while we certainly use a lot of probabilistic analysis that can be broadly applicable to other domains, a lot of the progress in recent years has been using techniques that aren't really transferable (e.g., SIFT or SLAM).




General AI is not applicable in real life yet. Probabilistic reasoning can be subsumed into a logic system. For example, a human (David Lowe) originated SIFT; a reasoning system should in principle be able to derive SIFT. We are not there yet, but that should be the goal.

The problem with single-domain systems is that none is even capable of robust inductive transfer (http://en.wikipedia.org/wiki/Inductive_transfer). Some reasoning problems are mathematically hard (http://en.wikipedia.org/wiki/List_of_undecidable_problems): no algorithm, even in principle, can solve them. Among other things, you would need infinite processing speed to do so. No amount of science can help here, as the obstacle is in the logical domain.
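(The undecidability point can be made concrete with the classic halting-problem diagonal argument -- a sketch I'm adding here, not something from the linked list. The names `halts` and `diag` are illustrative, and the fixed `return True` stands in for *any* purported decider:)

```python
def halts(func, arg):
    """A purported halting decider: claims to predict whether
    func(arg) eventually halts. Any concrete implementation works
    for this sketch; the construction below defeats every one."""
    return True  # arbitrary fixed answer, standing in for any decider

def diag(func):
    """Do the opposite of whatever halts() predicts about func(func)."""
    if halts(func, func):
        while True:   # halts predicted "halts" -> loop forever
            pass
    else:
        return        # halts predicted "loops" -> halt immediately

# If halts(diag, diag) returns True, then diag(diag) loops forever, so the
# prediction was wrong; if it returns False, diag(diag) halts -- wrong again.
# Hence no correct halts() can exist (Turing, 1936). We only inspect the
# prediction here; actually calling diag(diag) would loop forever.
prediction = halts(diag, diag)
```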


Your statement about inductive transfer is false. There are many vision systems, for example, that use transfer learning (as it is known in our community) to apply knowledge gained on one task to another. Moreover, many of these approaches are specific to vision -- they can't easily be applied in other domains.
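(To make the transfer-learning idea concrete, here's a minimal numpy sketch -- not any particular vision system. A linear "feature extractor" `W` is fit on a data-rich source task, then frozen and reused on a target task with only a few labelled examples, where just a small output head is trained. All the names and the synthetic data are illustrative:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Source task: plenty of data. Learn a linear feature extractor W
# by least squares from inputs to (noisy) feature targets.
X_src = rng.normal(size=(200, 10))
true_W = rng.normal(size=(10, 4))
F_src = X_src @ true_W + 0.01 * rng.normal(size=(200, 4))
W, *_ = np.linalg.lstsq(X_src, F_src, rcond=None)

# Target task: only 20 labelled examples. Freeze W (the transferred
# knowledge) and fit only a small linear head on top of its features.
X_tgt = rng.normal(size=(20, 10))
y_tgt = (X_tgt @ true_W).sum(axis=1) + 0.01 * rng.normal(size=20)
feats = X_tgt @ W                      # frozen, transferred features
head, *_ = np.linalg.lstsq(feats, y_tgt, rcond=None)

# Predictions on the target task reuse what was learned on the source task.
pred = (X_tgt @ W) @ head
```

With so few target labels, fitting all 10 input weights from scratch would be poorly constrained; reusing the frozen features leaves only 4 head parameters to estimate, which is the basic payoff of transfer.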

The rest of your comments about reasoning systems are either part of theoretical computer science (e.g., most of that list of undecidable problems), or in the realm of philosophy/metaphysics/80s-style AI. The former is an interesting area, but much closer to math than most other areas of CS. I call the latter philosophy/metaphysics because these are often questions that are interesting from a purely academic viewpoint, but are utterly useless for trying to make progress on real tasks. This was the big problem with 80s style AI: people hoped that they could build general systems which could "reason" about how to derive algorithms to do particular tasks. Knowledge bases were supposed to be steps in this direction.

What the community learned is that this is not a valid approach, in part because it doesn't take into account the enormous amount of data people have access to from birth to adulthood (not to mention the fact that we can interact with our environment and see the results of our actions). There doesn't seem to be a good way to provide a computer system this kind of information.

There are also strong biological arguments against this idea. For example, a large fraction of human brain cells are exclusively devoted to processing visual information. This is in addition to the significant amount of visual processing done by our eyes and the optic nerve. There are similar systems in the brain devoted to speech processing and language, etc.

All of this suggests that a general reasoning system cannot hope to solve the challenging problems in these different domains.



