
To add to the many good responses here so far: AI is such an overloaded term with many negative connotations that it's no longer used. It's also now specialized into separate components, as researchers realize that it's not "cheating" to study e.g. vision separately from natural language processing. Even humans cheat, by having separate portions of the brain specialize in different parts of "AI".

So some of the major AI fields are now known as:

  * machine learning - learning from data, aka applied statistics, aka how to "learn"
  * computer/machine vision - how to "see"
  * natural language processing - how to "read" and "write" and "translate"
  * speech recognition - how to "hear"
  * robotics - how to "do"
  * affective computing - how to "feel" and "act"
There are others as well, but as you can see from this simplistic breakdown, the specialities loosely mirror those in biology/medicine/neuroscience, and indeed there are also researchers who straddle the boundaries between the natural side and the computation side of things.

As a vision researcher, I can tell you that there's a huge amount of work being done, and progress being made, in our field -- both in academia and in industry. While it often doesn't make the news, we consider this a feature, not a bug.

For following progress in these fields, I have two comments:

1. There's no good resource I know of to follow all the subfields of AI. Instead, there are different sources for each field.

2. While others have recommended journals and conferences, I think they can be tough to read if you're not already immersed in the field. Instead, I'd recommend starting with the Wikipedia pages for each field, skimming the general list of topics, and then finding the appropriate papers if you're really interested. A good way to find important papers is by looking on Google Scholar for papers with lots of citations (insert usual disclaimers here about citations != quality of work, etc.)

I can get you started in computer vision with two very influential papers in the last decade that have also had a huge impact on industry:

P. Viola and M. Jones - Robust Real-time Object Detection http://research.microsoft.com/en-us/um/people/viola/Pubs/Det...

This paper revolutionized face detection, and is the basis for automatic face detection in most consumer cameras.
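For a sense of why the Viola–Jones approach is fast enough to run in real time, one of its key tricks is the integral image, which lets you compute the sum of any rectangular region (and hence Haar-like features) in constant time. Here's a minimal illustrative sketch in Python/NumPy; the function names are my own, not from the paper or any library:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y+1, :x+1] (cumulative sum over both axes)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] in O(1) via four lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]            # strip above the rectangle
    if left > 0:
        total -= ii[bottom, left - 1]          # strip left of the rectangle
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]         # re-add the doubly subtracted corner
    return total

# A two-rectangle Haar-like feature: difference between adjacent regions.
img = np.arange(36, dtype=np.int64).reshape(6, 6)
ii = integral_image(img)
feature = rect_sum(ii, 0, 3, 5, 5) - rect_sum(ii, 0, 0, 5, 2)
```

The detector in the paper evaluates thousands of such features per window, which is only feasible because each one costs a handful of lookups regardless of its size.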

D. Lowe. Distinctive image features from scale-invariant keypoints http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf

This paper was the culmination of many years of work on detecting repeatable features in images and representing them in a consistently findable way. It is the basis for numerous object recognition algorithms, as well as for stitching multiple photos together into panoramas and for Microsoft's Photosynth.
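One widely reused idea from Lowe's paper is the nearest-neighbor ratio test for matching keypoint descriptors between two images: accept a match only if the closest descriptor is much closer than the second-closest, which filters out ambiguous matches. A rough sketch in Python/NumPy (the function name and the 0.8 default are illustrative; the paper discusses the threshold choice in detail):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match each row of desc_a to its nearest neighbor in desc_b,
    keeping only matches where the nearest neighbor is clearly closer
    than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches

# Toy descriptors: rows of desc_a should match rows 0 and 2 of desc_b.
desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[1.0, 0.05], [5.0, 5.0], [0.0, 1.0]])
matches = ratio_test_matches(desc_a, desc_b)
```

Real pipelines replace the brute-force loop with an approximate nearest-neighbor index, but the acceptance criterion is the same.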




I don't think it's true that it's no longer used; I certainly still consider my primary research field to be "AI", and the main journals and conferences I publish in have "AI" in the title. I do agree that some subfields have split off more, especially machine learning, robotics, and vision, but there's plenty of stuff that still goes on under the rubric of AI. Even the more split-off fields are still very present in big-tent AI conferences and journals, especially when presenting work on integrated systems.

Leslie Pack Kaelbling is mostly known as an ML researcher, but her AAAI-10 keynote slides make something of an argument for why AI is important as well: http://people.csail.mit.edu/lpk/AAAI10LPK.pdf


The skills and techniques required are very very different between the fields though; my PhD was in natural language processing but I don't have the first clue about machine vision, for example.

Hell, I was working on parsing biomedical text -- meaning I don't really have much of a clue about speech processing, machine translation, sentiment analysis, question answering or most of the other subfields of linguistic computing.

Of course there's plenty of room for cross-fertilization, but that's true across comp sci as a whole. Thinking of all of these things as being part of some coherent topic called AI is more of a historical legacy than a useful category.


I guess taking a mostly integrated-systems, applications-focused view, I see AI as a useful organizing concept for what to build, what the issues are in building it, etc. For example, I'm not sure what field Allen Newell's "knowledge level" talk would go in if not into AI.

I do agree there are lots of areas of research that are maybe more "algorithms" than "AI" (e.g. improving SAT-solving), but I disagree that that sort of specialization is the only way to do research. From my perspective, those areas of algorithms provide the raw-material research that can be used to build AI systems. Even building AI systems is often specialized to some extent, but I think it's useful to have a semi-coherent body of knowledge and a shared community around "AI" when doing so, rather than just the domain-specific algorithms and approaches.
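For readers who haven't run into SAT-solving: the problem is deciding whether a Boolean formula in clause form has a satisfying assignment, and the classic search procedure is DPLL. A deliberately minimal sketch in Python, just to show the shape of the algorithm (my own illustrative code; real solvers add watched literals, clause learning, restarts, and much more):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL-style SAT search. Clauses are lists of nonzero ints;
    a positive int is a variable, a negative int its negation.
    Returns a satisfying set of true literals, or None if unsatisfiable."""
    if assignment is None:
        assignment = set()
    # Simplify: drop satisfied clauses, remove falsified literals from the rest.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                      # clause already satisfied
        remaining = [lit for lit in clause if -lit not in assignment]
        if not remaining:
            return None                   # empty clause: conflict
        simplified.append(remaining)
    if not simplified:
        return assignment                 # every clause satisfied
    # Unit propagation: a single-literal clause forces that literal.
    for clause in simplified:
        if len(clause) == 1:
            return dpll(simplified, assignment | {clause[0]})
    # Branch on the first literal of the first clause, then backtrack.
    lit = simplified[0][0]
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))
```

Modern solvers built on this skeleton handle problems with millions of clauses, which is exactly why they make good "raw material" for planning, verification, and other AI systems.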


... and as someone whose research involves both SAT and statistical learning, I'd agree: putting a variety of specialties under the AI umbrella can make it easier to connect different areas in useful ways.


Seconding the recommendation for David Lowe's paper. It's easy to follow and had a serious impact on feature detection. If you have MATLAB you can also run his demo: http://www.cs.ubc.ca/~lowe/keypoints/

It's very interesting to see.



