
Sure, so what are the specific concepts it doesn’t understand?

I don’t think its ability to program in an obscure language is really a great test. That’s a matter of syntax more than semantics, no?

Novel conceptual blends are where it excels. Yes, it needs to understand the concepts involved to blend them —but humans need that too.




I think you missed my point. It's understandable that it doesn't know how to program in a moderately obscure language. But the model doesn't understand that it doesn't. The specific concepts it doesn't understand are what it is, what its limitations are, and what it's being asked to do.

It doesn't seem to have any "meta" understanding. It's subconscious thought only.

If I asked a human to program in a language they didn't know, they'd say they couldn't, or they'd ask for further instructions or a reference to the documentation, or they'd suggest asking someone else to do it, or they'd eventually figure out how to write in the language by experimenting on small programs and gradually writing more complex ones.

GPT-4 and friends "just" produce an output that seems like it could plausibly answer the request. If it gets it wrong, it just has another go using the same generative technique as before, with whatever extra direction the human decides to give it. It doesn't think about the problem.

("just" doing a lot of work in the above sentence: what it does is seriously impressive! But it still seems to be well behind humans in capability.)


I agree it has very minimal metacognition. That's partially addressed through prompt chaining, i.e., having it reflect critically on its own reasoning. But I agree that it lacks self-awareness.
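
A minimal sketch of that kind of prompt chain, assuming a hypothetical call_model(prompt) helper standing in for whatever LLM API you use (an illustration only, not any particular library's interface):

  from typing import Callable

  # Draft -> self-critique -> revise, feeding the model's own output back to it.
  # `call_model` is a hypothetical stand-in for an LLM API call (prompt in, text out).
  def answer_with_self_critique(question: str, call_model: Callable[[str], str]) -> str:
      # 1. Draft an initial answer.
      draft = call_model(f"Answer the following question:\n{question}")
      # 2. Ask the model to critique its own draft.
      critique = call_model(
          f"Question: {question}\nDraft answer: {draft}\n"
          "List any errors, gaps, or unjustified claims in the draft."
      )
      # 3. Ask for a revision that addresses the critique.
      return call_model(
          f"Question: {question}\nDraft answer: {draft}\nCritique: {critique}\n"
          "Rewrite the answer, fixing the issues raised above."
      )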

I think artifacts can easily reflect the understanding of their designer (Socrates claims an etymology of "technology" from echo-nous [1]).

But whether an artifact understands is entirely dependent on how you operationalize and measure it. Same as with people: we don't credit people with understanding unless we assess them.

And obviously we need to assess the understanding of machines: it is vitally important to measure how well they perform on evals of understanding across different domains.

But I have a really interesting supposition about AI understanding that involves its ability to access the Platonic world of mathematical forms.

I recently read a popular 2016 article on the philosophy of scientific progress. The author defines scientific progress as increased understanding and calls this the "noetic account." [2] That's a bit of theoretical support for the idea that human understanding consists of our ability to conceptualize the world in terms of the Platonic forms.

Plato ftw!

[1] See Plato's dialogue Cratylus.

[2] Dellsén, F. (2016). Scientific progress: Knowledge versus understanding. Studies in History and Philosophy of Science Part A, 56, 72-83.



