
I think if you could somehow examine the output of your own brain's language model in isolation, you would find it also doesn't "comprehend". Comprehension is something we assign to our higher-level cognitive models. It is difficult to introspectively isolate your own language center, though.

I took a stab at an exercise that may allow you to witness this within your own mind here: https://www.jerf.org/iri/post/2023/streampocalypse-and-first... Don't know if it works for anyone but me, of course, but it's at least an attempt at it.

Yes, you are correct. Oops. Too late to correct.


The language centres of our brain don't know what a dog is, but they can take the word "dog" and express it on a level that the logic centres of our brain can use. I don't know if "comprehending" is the right word, exactly, but it's transforming information from one medium to another in preparation for semantic and logical analysis.

GPT doesn't do that. What it does is related to meaning, but unlike the language-comprehension parts of our brains, which are (presumably) stepping stones between language and reason, GPT doesn't connect to any reasoning system. It can't; it's not built to interface with anything like that. It just reproduces patterns in language rather than extracting semantic meaning from them in a way that another system can use. I'm not saying that's more or less complicated, just different.



