Clear variable names and comments aren't a requirement at all.
It sounds to me like you have a philosophical problem with LLMs, which isn't something I think we can debate in good faith. I can just share my experience, which is that they are excellent tools for this kind of thing. Obvious caveats apply - only a fool would rely entirely on an LLM without applying any thought of their own.
I don’t have a philosophical problem with LLMs. I have a problem with treating LLMs as something other than what they are: predictive text generators. There’s no understanding informing the generation, just compression techniques that arise as part of training. Thus I wouldn’t trust them for anything except churning out plausibly structured text.