> The counterpoint is that LLMs can't find a missing line in a poem even when they are given the original.
True, but pointing out one limitation of the tech doesn't justify the sort of sweeping dismissals we see people make wrt LLMs.
The human brain has all sorts of limitations too, like horrible memory (supremely confident about wrong details) and catastrophic susceptibility to logical fallacies.
Have you not had this issue with LLMs? Because I have. Even with the latest models.
I think someone upthread was making an attempt at
> describing a limitation of the tech
but you keep swatting them down. I didn't read their comments as a wholesale dismissal of AI. They just said LLMs aren't great at sufficiently complex tasks. That's my experience as well. You're just disagreeing on what "sufficiently" and "complex" mean, exactly.
> The counterpoint is that LLMs can't find a missing line in a poem even when they are given the original.
PAC learning is basically existential quantification, and it has the same limits.
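For reference, the standard PAC guarantee really is existential in form (this is the textbook statement, not something spelled out upthread): given enough samples $m \ge \mathrm{poly}(1/\varepsilon, 1/\delta)$, a learner $A$ satisfies

$$\Pr_{S \sim \mathcal{D}^m}\big[\mathrm{err}_{\mathcal{D}}(A(S)) \le \varepsilon\big] \ge 1 - \delta,$$

i.e. the learner *probably* outputs an *approximately* correct hypothesis. Nothing guarantees correctness on any particular instance, which is the "same limits" point.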
But being a tool that can find a needle is not the same as finding every needle, or even reliably finding a specific one.
Being a general programming agent requires much more than just finding a needle.
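For what it's worth, the missing-line test is easy to run yourself. Here's a minimal sketch, assuming the OpenAI Python client with an API key in the environment; the model name and poem are placeholders, and you'd want to loop over many lines and models to get an actual failure rate:

```python
# Delete one line from a poem, give the model both versions,
# and ask it to identify the missing line.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

original = """Because I could not stop for Death,
He kindly stopped for me;
The carriage held but just ourselves
And Immortality.""".splitlines()

# Remove a random line to create the "damaged" copy.
missing_idx = random.randrange(len(original))
damaged = original[:missing_idx] + original[missing_idx + 1:]

prompt = (
    "Here is a poem:\n\n" + "\n".join(original) +
    "\n\nHere is the same poem with exactly one line removed:\n\n" +
    "\n".join(damaged) +
    "\n\nQuote the missing line verbatim and nothing else."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in whatever model you're testing
    messages=[{"role": "user", "content": prompt}],
)
answer = resp.choices[0].message.content.strip()

print("expected:", original[missing_idx])
print("got:     ", answer)
print("pass" if answer == original[missing_idx].strip() else "fail")
```

A single pass/fail tells you little either way; the interesting number is the failure rate over many random deletions, poem lengths, and models.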