The same is true of anything taught by an educator who happens to trust whatever source they looked up. The internet, textbooks, and even scientific articles can all be factually incorrect.
GNNs (of which LLMs are a subclass) have the potential to be optimized so that the knowledge contained within them remains as parsimonious as possible. The same cannot be said of a human reading some internet article in a field where they have no broader context.
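To make the "parsimonious" point concrete: one common way to operationalize parsimony in a neural network is to add a sparsity penalty to the training objective, so the optimizer is pushed toward representations that use fewer effective parameters. The sketch below is a toy PyTorch illustration under that assumption; the model, the `l1_lambda` coefficient, and `training_step` are all hypothetical stand-ins, not how any actual LLM is trained:

```python
import torch
import torch.nn as nn

# Toy model standing in for a much larger network; purely illustrative.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical coefficient; a real value would need tuning.
l1_lambda = 1e-4

def training_step(x, y):
    optimizer.zero_grad()
    logits = model(x)
    task_loss = loss_fn(logits, y)
    # Parsimony-as-sparsity: penalize the L1 norm of all weights so the
    # optimizer prefers solutions that rely on fewer effective parameters.
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    loss = task_loss + l1_lambda * l1_penalty
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data.
x = torch.randn(8, 16)
y = torch.randint(0, 4, (8,))
print(training_step(x, y))
```

Nothing comparable exists for a human reader: there is no global objective nudging a person's beliefs toward consistency with everything else they know.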
Plenty of people strongly believe strange ideas taught to them by some fourth-grade teacher, ideas that were never corrected over the course of their lives.
While your statements are correct for this minuscule snapshot in time, it is exceedingly short-sighted to assert that language modeling should be avoided because of issues that exist this month, while disregarding the clear improvements coming very soon.