Hacker News

Why do you say they just memorize and interpret? I can teach GPT-2 new things, including new objects and their physical properties, and it does a good job with that. That also means it has definitely not just regurgitated a matching sentence back to me.



When I see a new object for the first time, I MEMORIZE what I INTERPRET as its identifying traits, and ask someone who has already MEMORIZED what that object is to INTERPRET a concept with which I can associate those traits. The next time I encounter an object with those traits, I can recall the associations and compose those trait-level interpretations into an interpretation of the object.

At a fundamental level, that's all this is: compositions of associated memorizations and interpretations, which map to compositions of sentence parts the machine can regurgitate back to you.
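The memorize/associate/recall loop described above can be sketched as a toy trait lookup. This is purely a hypothetical illustration of the commenter's framing (the function names, traits, and matching rule are all my own invention), not a claim about how GPT-2 actually works:

```python
# Toy sketch of the memorize-then-interpret loop: store the traits of a
# newly seen object under a concept label, then "interpret" a later
# object by recalling the memorized concept sharing the most traits.

def memorize(memory, concept, traits):
    """Associate a set of identifying traits with a concept label."""
    memory[concept] = set(traits)

def interpret(memory, traits):
    """Compose trait-level matches into an object-level interpretation."""
    traits = set(traits)
    # Recall the concept whose memorized traits overlap the most.
    return max(memory, key=lambda c: len(memory[c] & traits), default=None)

memory = {}
memorize(memory, "apple", ["round", "red", "edible"])
memorize(memory, "ball", ["round", "bouncy"])

print(interpret(memory, ["red", "round"]))  # "apple" (two traits match)
```

Under this framing, an unfamiliar object simply triggers another call to `memorize`, and recognition is nothing more than composing recalled associations.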


That's a bit of a reductionist way of looking at any sort of learning; I'm not sure how it's helpful to use memorize/interpret to distinguish what language models do from other kinds of learning.

I might be missing the point though?



