The point of the original YHIW, and of bringing it up in this context, is that providing specific guidance is more useful than vague, victim-blaming criticism.
I have experimented with prompts, and in fact the case in point was one of those experiments. The highly non-deterministic and time-varying nature of LLMs makes any kind of prompt engineering a black art at best.
If you'd at least offer a starting point, there'd be some basis for assessing the positive information we're discussing here, rather than doing multiple rounds on why victim-blaming is generally unhelpful. And if those starting points are as easy to surface as you're suggesting, that would be a small ask of you.
In its barest implementation, an LLM is an automatically assembled program that spits out text: a highly intuitive free-association engine.
Reasoning models essentially prompt themselves into doing more than that, but that's the short version, and you don't seem interested in the long one.
So give it everything it needs to generate the text you want in one go. Not multiple messages, as if you're writing to a particularly annoying friend who keeps talking over you: it can't talk. It'll just keep trying to respond to the request with more free association.
If you're chatting with it, you're just polluting the context with noise. That can be useful - say, if you're trying to get it to hallucinate, or to jailbreak it out of its shell - but that's a very advanced use case.
If you just want results, write one prompt per conversation that makes it free-associate the answer you need. A basic prompt that will work is "Write a play about a blue dog that had an encounter with an evil hydrant" or "write an ansible playbook that creates a read-only file with the content 'i can't prompt' in the configuration directory of servers with names that match the string Mary".
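To make that concrete, here's roughly the kind of playbook a single well-specified prompt like that should get you in one shot. The prompt doesn't name the configuration directory, so /etc/myapp below is just a placeholder, and the name matching is done with an Ansible host pattern:

    # Sketch of a plausible one-shot answer; /etc/myapp stands in for the unspecified config directory
    - name: Drop a read-only marker file on servers whose names match "Mary"
      hosts: "*Mary*"              # Ansible host pattern: any inventory name containing "Mary"
      become: true
      tasks:
        - name: Create the file with read-only permissions
          ansible.builtin.copy:
            content: "i can't prompt"
            dest: /etc/myapp/i_cant_prompt.txt
            mode: "0444"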
And yes, you can then tell it "no, I meant just on servers that start with Mary, it shouldn't match Rosemary" and it'll still be kind of OK and respond with corrected code. But it'll usually be less good than if it had done it right from the beginning, because now it's also getting context from the previous code, and if the change is fundamental enough it won't do a good job of restarting the thinking process in a sane way.
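Whereas if the starts-with-Mary requirement had been in the original prompt, the tighter host pattern would simply have been there from the start:

      hosts: "Mary*"               # matches Mary01 and Mary-db, but not Rosemary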
I've used LLMs enough that much of this is obvious to me, or is what I've already worked out on my own.
Some questions are more complex than a simple ask/response, and that's where I've encountered issues.
The canonical example is asking for suggestions given highly specific tastes and a large set of works already experienced (positively and/or negatively). The LLM jumping the gun with irrelevant suggestions in that case is just plain annoying.
Karpathy has a new intro series that I think puts one into the correct mindset.
As others have said, it's not AGI yet, so holding it right is, in fact, critical.