Yeah, also: prompts shouldn't be developed in the abstract. The goal of a prompt is to activate the model's internal representations so it can best achieve the task. Without automated methods, this requires iteratively testing the model's reaction to different inputs, trying to understand how it's interpreting the request and where it's falling down, and then patching up those holes.
Need to verify it even knows what you mean by "nothing."
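Concretely, that iterate-and-patch loop is just: run the prompt against a small suite of probe inputs, score the responses, and see which cases the model is misinterpreting. A minimal sketch, where `call_model` is a hypothetical stand-in for a real LLM call (stubbed here so the sketch is self-contained):

```python
from collections.abc import Callable

def call_model(prompt: str, user_input: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    # This fake model ignores the "reply NONE" instruction on empty
    # input, illustrating the kind of hole you'd want to patch.
    if not user_input.strip():
        return ""
    return f"SUMMARY: {user_input[:20]}"

def evaluate(prompt: str, cases: list[tuple[str, Callable[[str], bool]]]) -> list[str]:
    """Return the probe inputs where the model's reaction fails the check."""
    failures = []
    for user_input, check in cases:
        response = call_model(prompt, user_input)
        if not check(response):
            failures.append(user_input or "<empty input>")
    return failures

prompt = "Summarize the input. If there is nothing to summarize, reply NONE."
cases = [
    ("The cat sat on the mat.", lambda r: r.startswith("SUMMARY")),
    ("", lambda r: r == "NONE"),  # does it know what you mean by "nothing"?
]

print(evaluate(prompt, cases))  # the empty-input probe surfaces as a failure
```

Each failure tells you where the model's interpretation diverges from yours; you patch the prompt and rerun the suite.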
The only public prompt optimizer I'm aware of right now is DSPy, but it doesn't optimize your main prompt request, just some of the problem-solving strategies the LLM is instructed to use, and your few-shot learning examples. I wouldn't be surprised if there's a public general-purpose prompt-optimizing agent by this time next year, though.
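To make "optimizing your few-shot examples" concrete: the idea is to search over subsets of candidate demonstrations and keep whichever subset scores best on a dev set, while the main instruction stays fixed. A hedged sketch, with `score_prompt` as a hypothetical stand-in for a real task metric (a real setup, e.g. DSPy's few-shot optimizers, would run the model on dev examples instead):

```python
from itertools import combinations

def build_prompt(instruction: str, demos: tuple[str, ...]) -> str:
    # The instruction (your "main prompt request") stays fixed; only
    # the demonstrations appended below it are searched over.
    return instruction + "\n\n" + "\n".join(demos)

def score_prompt(prompt: str) -> float:
    # Hypothetical metric stub: a real one would run a dev set through
    # the model with this prompt and measure task accuracy. Here we
    # just reward diverse demonstrations via unique-token count.
    return float(len(set(prompt.lower().split())))

def best_few_shot(instruction: str, candidates: list[str], k: int = 2) -> tuple[str, ...]:
    # Exhaustive search over k-example subsets; keep the best-scoring one.
    return max(
        combinations(candidates, k),
        key=lambda demos: score_prompt(build_prompt(instruction, demos)),
    )

demos = best_few_shot(
    "Classify the sentiment of the review.",
    ["Review: great film -> positive",
     "Review: awful plot -> negative",
     "Review: great film -> positive"],  # duplicate loses to the diverse pair
)
print(demos)
```

With more candidates, exhaustive search gets expensive fast, which is why real optimizers use bootstrapped or greedy selection instead.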