How confident can you be in this? Have you analyzed what exactly the billions of weights do?
I’ve got my opinions about what LLMs are and what they aren’t, but I don’t confidently claim that they must be any particular thing. There’s a lot of stuff in those weights.
Except the weights form complex relationships in order to produce very usable, human-like responses. You can't look at the weights and say a model is doing this, or not doing that, unless you dive into that particular model.
Especially when you have billions of weights.
These models are finding general patterns that apply across all kinds of subjects. Patterns they aptly recognize and weave into all kinds of combinations. They converse sensibly on virtually every topic known to humankind, and can talk sensibly about any two topics in conjunction. There is magic.
Not mystic magic, but we are going to learn a lot as we decode how their style of processing (after training) works. We don't have a good theory of how either LLMs or we "reason" in the intuitive sense. And yet they learn to do it. It will inspire improved and more efficient architectures.
Love your ending. I have spent four decades looking at real neurons, real synapses, and real axons, and I can tell you with complete confidence that we are all just zombies.
Imagining we are really doing everything that it does automatically, including learning via algorithms we have only a vague understanding of.
That is a strange thought. I could look at all my own brain's neurons, even with a heads-up display showing all the activity clearly, and have no idea that it was me.
The closest I got to biological neurons was the toy but interesting problem of using a temporal pattern of neuron spikes to deduce the weights for arbitrarily connected (including recurrent) networks of simple linear integrate-to-threshold, spike-and-reset "neurons".
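That problem is concrete enough to sketch. Here is a minimal illustration in Python (numpy only); the network size, the constant external drive, and the perceptron-style solver are all my own assumptions for illustration, not the original setup. It simulates a small, arbitrarily connected (recurrent) network of linear integrate-to-threshold, spike-and-reset units, then deduces the weights from the spike trains alone.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, THETA = 5, 600, 1.0

# Assumed ground-truth network: W[i, j] is the weight from neuron j to
# neuron i; connectivity is arbitrary, so recurrent loops are allowed.
W_true = rng.normal(0.0, 0.4, size=(N, N))
drive = rng.uniform(0.08, 0.20, size=N)  # constant external input per step

def simulate(W):
    """Linear integrate-to-threshold, spike-and-reset dynamics."""
    v = np.zeros(N)
    spikes = np.zeros((T, N), dtype=bool)
    for t in range(1, T):
        v += drive + W @ spikes[t - 1]   # purely linear integration
        spikes[t] = v >= THETA           # spike on reaching threshold...
        v[spikes[t]] = 0.0               # ...then reset to zero
    return spikes

spikes = simulate(W_true)

# Deduce the weights: because integration is linear, the potential of
# neuron i at time t is a known linear function of its incoming weights,
# given the spikes observed since its last reset. Each time step thus
# yields one linear inequality (>= THETA if i spiked, < THETA if not),
# and a perceptron-style pass can find weights satisfying all of them.
W_hat = np.zeros((N, N))
for i in range(N):
    feats, steps, fired = [], [], []
    acc, k = np.zeros(N), 0              # presynaptic spikes since i's last reset
    for t in range(1, T):
        acc += spikes[t - 1]
        k += 1
        feats.append(acc.copy()); steps.append(k); fired.append(spikes[t, i])
        if spikes[t, i]:
            acc[:] = 0.0                 # neuron i reset: open a new window
            k = 0
    A = np.asarray(feats)
    b = THETA - drive[i] * np.asarray(steps)  # threshold minus the drive's share
    y = np.asarray(fired)
    w, eta = np.zeros(N), 0.05
    for _ in range(2000):                # fix violated inequalities iteratively
        margins = A @ w - b
        wrong = (y & (margins < 0)) | (~y & (margins >= 0))
        if not wrong.any():
            break
        signs = np.where(y, 1.0, -1.0)
        w += eta * (signs[wrong] @ A[wrong]) / wrong.sum()
    W_hat[i] = w

# The inequalities pin the weights down only to a feasible region, so
# recovery is approximate; longer spike recordings tighten it.
print("max abs weight error:", np.abs(W_hat - W_true).max())
```

The design point worth noting is that linearity does the heavy lifting: every observed spike and non-spike becomes a linear constraint on the incoming weights, so deducing them is a feasibility problem rather than a nonlinear fit.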
Algorithms can be nearly magical. In 1941 the world woke up to the “magic” of feedback, and ten years later cybernetics was all the rage. We humans are just bags of protoplasm, but it seems rather magical to me to be human.