
Yeah, also, prompts should not be developed in the abstract. The goal of a prompt is to activate the model's internal representations so it can best achieve the task. Without automated methods, this requires iteratively testing the model's reaction to different inputs, trying to understand how it's interpreting the request and where it's falling down, and then patching up those holes.
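Rough sketch of that iteration loop, assuming the OpenAI Python client (v1+); the model name, prompt variants, and test inputs are just placeholders for whatever you're actually probing:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set; model name below is a placeholder

    def call_model(prompt: str, user_input: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": prompt},
                {"role": "user", "content": user_input},
            ],
        )
        return resp.choices[0].message.content

    # Run a couple of prompt variants against inputs that probe the edge case,
    # then read the replies to see where the model's interpretation breaks.
    prompt_variants = [
        "Summarize the text in one sentence.",
        "Summarize the text in one sentence. If there is no text, say so explicitly.",
    ]
    test_inputs = ["", "A short paragraph about prompt design."]

    for prompt in prompt_variants:
        for text in test_inputs:
            print(repr(prompt), "|", repr(text), "->", call_model(prompt, text))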

You need to verify whether it even knows what you mean by "nothing".




In the end, it comes down to a task similar to people management, where giving clear and simple instructions works best.


Which automated method do you use?


The only public prompt optimizer I'm aware of right now is DSPy, but it doesn't optimize your main prompt request, just some of the problem-solving strategies the LLM is instructed to use and your few-shot examples. I wouldn't be surprised if there's a public general-purpose prompt-optimizing agent by this time next year, though.
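For reference, a minimal sketch of what that looks like in DSPy, from memory of the 2.x API (dspy.OpenAI has since been superseded by dspy.LM); the model name, metric, and toy training example are placeholders:

    import dspy
    from dspy.teleprompt import BootstrapFewShot

    # Placeholder model; assumes an OpenAI key is configured.
    dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

    class QA(dspy.Signature):
        """Answer the question concisely."""
        question = dspy.InputField()
        answer = dspy.OutputField()

    # ChainOfThought is one of the problem-solving strategies DSPy wires in for you.
    program = dspy.ChainOfThought(QA)

    # Toy trainset; in practice you'd use real labeled examples.
    trainset = [
        dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    ]

    # The teleprompter searches for few-shot demonstrations that improve the metric;
    # it does not rewrite your core request.
    optimizer = BootstrapFewShot(metric=lambda ex, pred, trace=None: ex.answer == pred.answer)
    compiled = optimizer.compile(program, trainset=trainset)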



