
I remember getting a lot of flak for saying a purely statistical framework is not going to achieve human level intelligence, but I still firmly believe that.

I also believe the path forward is research in knowledge representation, and even now when I search for it, I can barely find anything interesting happening in the field. ML has gotten so much interest and hype because it’s produced fast practical results, but I think it’s going to reach a standstill without something fundamentally new.




I tend to agree, and it's strange, but there are probably lots of actual ML practitioners who have never even heard of the neat vs. scruffy debate. Naturally, most who have heard of it feel the issue is already completely resolved in their favor. On the whole, not a very open-minded climate.

Credit where it’s due for the wild success of fancy stats, but we should stay interested in hybrid systems with more emphasis on logic, symbols, graphs, interactions and whatever other data structures seem rich and expressive.

Call me old school, but frankly I'd prefer that the society-of-mind flavor of system ultimately be in charge of things like driving cars, running court proceedings, and optimizing cities or whole economies. Let it use fancy stats as components and subsystems, sure, but let it produce coherent arguments or critiques that can actually be understood, summarized, and debugged.
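
To make that concrete, here's a toy sketch of what I mean; purely a minimal illustration with made-up labels, thresholds, and rule names, not anything real: a trained model supplies scored perceptions, and a small symbolic rule layer turns them into a decision plus a trace you can actually read, summarize, and debug.

    # Toy sketch of a hybrid setup (all names and thresholds are invented):
    # a statistical component scores raw observations, and a small rule
    # layer turns those scores into a decision plus a human-readable trace.

    from dataclasses import dataclass

    @dataclass
    class Perception:
        label: str        # e.g. "pedestrian_ahead", produced by some trained model
        confidence: float

    # Hypothetical rule base: (condition, action, reason) triples.
    RULES = [
        (lambda p: p.label == "pedestrian_ahead" and p.confidence > 0.6,
         "brake",
         "pedestrian detected with confidence above 0.6"),
        (lambda p: p.label == "clear_road" and p.confidence > 0.9,
         "proceed",
         "road judged clear with high confidence"),
    ]

    def decide(perceptions):
        """Return (action, trace) so the decision can be read and debugged."""
        trace = []
        for p in perceptions:
            for cond, action, reason in RULES:
                if cond(p):
                    trace.append(f"{action}: {reason} ({p.label}={p.confidence:.2f})")
                    return action, trace
            trace.append(f"no rule fired for {p.label}={p.confidence:.2f}")
        return "fallback_stop", trace

    action, trace = decide([Perception("pedestrian_ahead", 0.83)])
    print(action)              # brake
    print("\n".join(trace))    # the argument you can actually critique

The interesting part isn't the toy rules, it's that the thing in charge emits something you can argue with.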


You make a very interesting point. Human understanding and logic can be rationally explained. A judge, for example, can give a very thorough account of exactly why they reached their verdict. I think that would be an excellent benchmark for AI.

This seems rather impossible when your understanding of the world is a tangle of billions of messy and uncertain parameters. But perhaps this is the first step? Maybe we can take the neural nets trained by ML and create constructions on top of them.
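
One rough flavour of that already exists as surrogate modelling / distillation. Just as a toy illustration (sklearn assumed, iris dataset, nothing tuned): fit an ordinary neural net, then fit a shallow decision tree to mimic its outputs, and read the tree as an approximate, inspectable account of what the net is doing.

    # Rough sketch of one "construction on top": distill a trained neural
    # net into a small decision tree (a surrogate model) whose behaviour
    # can be read as explicit rules. Toy example; scikit-learn assumed.

    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # The messy, uncertain-parameters part: an ordinary neural net.
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    net.fit(X, y)

    # The construction on top: a shallow tree trained to mimic the net's outputs.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, net.predict(X))

    # Human-readable approximation of what the net is doing.
    print(export_text(surrogate, feature_names=list(data.feature_names)))

Of course the tree only approximates the net, but that gap is itself measurable, which already feels a step closer to something a judge could sign off on.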


I think this is effectively provable from extraordinarily plausible premises.

    1. We want to infer A from Q.
    2. Most A we don't know, or have no data for, or the data is *in the future*.
    3. Most Q we cannot conceptualise accurately,
        since we have no explanatory theory in which to phrase it or to provide measures of it.
    4. All statistical approaches require knowing frequencies of (Q, A) pairs (by def.).
    5. In the cases where there is a unique objective frequency of (Q, A), we often cannot know it (2, 3).
    6. In most cases there is no unique objective frequency
        (e.g., there is no single animal any given photograph corresponds to,
        nor any objective frequency of such an association).
So, conclusion:

In most cases the statistical approach either necessarily fails (it's about future data; it's about non-objective associations; it's impossible to measure or obtain objective frequencies); or, if it doesn't necessarily fail, it fails in practice (it is too expensive, or otherwise impossible, to obtain the authoritative Q-A frequency).
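
A toy way to see (4) and (5), with entirely made-up data: take the "statistical model" to be literally a table of observed (Q, A) frequencies. It can only answer the Q it has frequencies for, and even then only where the association is unambiguous.

    # Tiny illustration of premises 4-5 (data invented): a purely statistical
    # "model" here is literally a table of observed (Q, A) frequencies. For an
    # unseen Q it has nothing to fall back on.

    from collections import Counter, defaultdict

    observed = [
        ("photo_1", "cat"), ("photo_1", "cat"), ("photo_1", "dog"),
        ("photo_2", "dog"),
    ]

    freq = defaultdict(Counter)
    for q, a in observed:
        freq[q][a] += 1

    def answer(q):
        if q not in freq:
            return None  # no (Q, A) frequencies -> the approach has nothing to say
        return freq[q].most_common(1)[0][0]

    print(answer("photo_1"))  # 'cat' -- though photo_1 had no unique answer either
    print(answer("photo_3"))  # None -- Q outside the observed frequencies

Real systems interpolate instead of returning None, but the interpolation is still only as good as the frequencies behind it.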

Now, of course, if your grift is generating nice cartoons or stealing cheap copy from ebooks, you can convince the audience of the magical power of associating text tokens. That, of course, should be ignored when addressing the bigger methodological questions.


I do agree here!

Bit of a tangent from the thread but what have been the most valuable advances in knowledge representation in the last 20 years? Any articles you could share would be lovely!


I'm no expert and I don't know, unfortunately. It is something I have spent countless hours walking around my room pondering for the last 3-4 years, though. I think I have some interesting ideas, and I would love to get a PhD studying it if I ever reach enough financial independence that I don't have to worry about money.



