Hacker News

This accountability argument is stone-old. You'll find it in any intro AI book, though of course only implemented in Lisp, so it would have to be an older book.

It's the typical argument against neural nets: because they cannot explain their chain of reasoning, you are not able to train them the right way, or to keep them from training in the wrong direction. When something goes wrong, you've got a problem.

Old AI had the same problem; that's why they added the reasoning chain to the backtracking, so the system could give better answers and you could introspect them.




Maybe true, but "stone-old" ideas aren't "bad ideas". Neural networks were "stone old" until all this big data stuff went crazy, and now suddenly they're on the frontpage of Hacker News all the time again and people think they're the New Hotness (TM). It's similar with many long-forgotten functional programming techniques now being talked about as new stuff: much of it is refinement of old ideas, finally hitting the mainstream.

The AI Winter is thawing... but whose lap will it fall into after the defrost? Big data mining organizations, or everyday hackers, or....?


My dream would be if Good Old-Fashioned AI (the symbolic, explainable kind Sussman is interested in) were to have the sudden redemption that neural nets had.

It was easier for neural nets, though, because they were close to a previously successful AI mechanism (machine learning with logistic regression), it's just that we had spent decades talking about them with different words for no good reason. There's a much larger gulf between GOFAI and what we do now.


Pattern matching has unbeatable performance benefits, at least for our current computers.

If you go deep into numerical calculus, you'll see that our computers are much better suited to working with continuous smooth functions than with discrete, noise-like data. So all the power goes to the people who turn their knowledge into a small set of interpolated curves. (And yes, I think that's very counterintuitive.)

Anyway, I'm not convinced this is a fundamental property of CS. It seems at least possible that different architectures could have different constraints and make symbolic AI viable. But those would need to be very different architectures, not based on von Neumann's or Turing's ideas at all.
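The "knowledge as interpolated curves" point can be sketched in miniature. This is a hedged toy illustration, not anything from the thread; the names (`table`, `discrete_eval`, `smooth_eval`) are invented for the example:

```python
import math

# Discrete "knowledge": a lookup table sampled from an underlying smooth law.
# Keys are inputs in hundredths of a radian, values are precomputed outputs.
table = {x: math.sin(x / 100.0) for x in range(0, 628)}

def discrete_eval(x_hundredths):
    # One hash lookup per query; undefined anywhere between the samples.
    return table[x_hundredths]

def smooth_eval(x):
    # The same knowledge compressed into one smooth formula: branchless,
    # vectorizable, and defined everywhere, not just at the sampled points.
    return math.sin(x)

# Both agree where the table is defined:
assert abs(discrete_eval(314) - smooth_eval(3.14)) < 1e-12
# Only the smooth form generalizes between samples:
smooth_eval(3.1415)            # fine
# discrete_eval(314.15)        # KeyError: the table stops at its samples
```

The smooth form is also the one current hardware rewards: a tight loop of floating-point math pipelines and vectorizes, while the table version is a chain of branchy, cache-unfriendly lookups.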


By "stone old" I of course mean better than today's ideas; that should be clear from the context. Not everything that is fast is also good.

You can also come up with some kind of Greenspun's tenth rule applied to neural nets:

"Any sufficiently complicated modern AI program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."

In this case not even that.



