I have run LLMs at home, up to 70B models. While I agree with some of what you said, I don't really see how any of it prevents LLMs from being useful.
Yes, LLMs currently don't learn. The same is true of humans who cannot form new long-term memories. Such a person is at a huge disadvantage, but (hopefully) no less capable when given access to a scratchpad and extra thinking time to compensate.
As for the claim that they are deterministic and lack noise: why not provide the noise? Modern machines are quite good at generating randomness, so there isn't much reason not to inject it at inference time, e.g. via sampling temperature.
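To make the "inject noise at inference time" point concrete, here is a minimal sketch of temperature sampling, the standard way randomness is added when decoding from a model's output logits. The function name and the toy logits are my own illustration, not from any particular library:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits, injecting noise via temperature.

    temperature <= 0 means greedy decoding (fully deterministic argmax);
    higher temperatures flatten the distribution, adding more randomness.
    """
    rng = rng or random.Random()
    if temperature <= 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (shifted by max for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the resulting probability distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

With temperature 0 the model is deterministic; any positive temperature makes repeated runs on the same prompt diverge, which is exactly the injected noise being discussed.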