> If not, can somebody recommend resources related to learning how to accomplish non-trivial code generation?
Learn how to think ontologically and break down your requests first by what you're TRULY looking for, and then understand what parts would need to be defined in order to build that system -- that "whole". Here's some guides:
1.) https://platform.openai.com/docs/guides/prompt-engineering
2.) https://www.promptingguide.ai/
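To make that concrete, here's a made-up example of what a broken-down request can look like as a prompt -- the tool and its parts are purely hypothetical, just to show the structure:

```python
# A made-up example of breaking a request down into the parts that define the "whole".
# The tool and its components are hypothetical placeholders.
PROMPT = """
I want a CLI tool that syncs a local folder to S3 (the "whole").

Parts that need to be defined before you write any code:
1. File discovery: walk the folder, skipping dotfiles and anything listed in .syncignore.
2. Diffing: compare local files to S3 objects and list adds, updates, and deletes.
3. Transfer: upload changed files with retries and a bounded number of concurrent workers.
4. Reporting: print a summary of what changed and what failed.

Generate each part as a separate module, then a main() that wires them together.
"""

# Paste PROMPT into the chat UI, or pass it as the user message in an API call.
print(PROMPT)
```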
> Learn how to think ontologically and break down your requests first by what you're TRULY looking for, and then understand what parts would need to be defined in order to build that system -- that "whole".
Since I’m dealing with models rather than other engineers, should I expect the process of breaking down the problem to be dramatically different from writing design documents or API specs? I rarely have difficulty prompting (or creating useful system prompts for) models when chatting or doing RAG work with plain English docs, but once I try to get coherent code from a model, things fall apart pretty quickly.
That's actually a solid question! You can probably ask GPT to AI-optimize a standard technical spec you have and to "ask clarifying questions in order to optimize for the best output". I've done that several times with past specs and it was quite a fruitful process!
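In case it helps, here's roughly what that looks like with the OpenAI Python SDK -- the model name and spec path are placeholders, not anything specific I use:

```python
# Rough sketch: hand the model an existing spec and ask it to optimize the spec
# for code generation, asking clarifying questions first.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("spec.md") as f:  # your existing technical spec (placeholder path)
    spec = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are helping turn a human-written technical spec into a prompt "
                "optimized for code generation. Before producing anything, ask "
                "clarifying questions about any ambiguous or missing requirements."
            ),
        },
        {"role": "user", "content": spec},
    ],
)

# On the first turn this should come back as questions, not code.
print(response.choices[0].message.content)
```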
Great idea. I’ve used that tactic in the past for non-code related prompts; not sure why I didn’t think of trying it with my code-generation prompting. I’ll give it a shot.
the "ask me what info you're missing" strategy works very well, since the AI will usually start the task every time to avoid false positives of asking a question. and then it also asks very good questions, I then realize were necessary info