Maybe the term "engineer" did the concept a disservice, but prompt engineering has a lot of parallels to the field of UX.
Currently there's a lot of intuition involved, so it can come across as made up, but there are novel concepts that meaningfully affect the end quality of what you make, and it takes time to learn and/or discover them.
As time goes on, I expect our understanding of what underlies "good prompts" will bridge the gap from intuition to science, much like UX bridged into neuroscience and psychology. If you understand things like attention and logits, that's already starting to happen: you can use that knowledge to identify gaps in the abilities of LLMs and start to bridge them.
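To make that concrete, here's a minimal sketch of what "using logits" can mean in practice. It assumes the Hugging Face transformers library, PyTorch, and the public gpt2 checkpoint (any causal LM would work the same way); a flat distribution over the top next-token candidates is a signal the model is guessing, which a sharper prompt can often fix:

    # Minimal sketch: inspect next-token logits to spot where a model
    # is uncertain. Assumes the transformers and torch libraries plus
    # the public gpt2 checkpoint; any causal LM works the same way.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of Australia is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token

    # Softmax the logits and look at the top candidates: a flat
    # distribution here means the model is effectively guessing.
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")

None of that is prompt engineering per se, but knowing the model produces a probability distribution at every step is exactly the kind of knowledge that turns prompt tweaks from guesswork into diagnosis.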
-
People are convinced that future LLMs will obsolete prompt engineering. To me that'd be like people in the 90s thinking computers would obsolete UX because more powerful computers would be better at making user interfaces, and in turn anyone would be able to do it.
In some ways they'd be right: today you don't need a UX expert from PARC to integrate a WYSIWYG interface into your product. Computers got so powerful that in milliseconds we can download libraries that implement the interface and render it across any form factor you can imagine. So now a WYSIWYG editor on your contact form is nothing.
But as computers got more powerful they could do new things, so UX advanced to improving how we interface with those new things. Devices like the Vision Pro will unlock new areas of UX based on the novel capabilities they possess.
I think people are making a similar mistake with LLMs: they're focused on the idea that we'll just do the current things better with more powerful models. But more powerful models will be something we can "prompt engineer" into use cases we haven't even considered yet. (I also built notionsmith.ai, and I'd argue it fits into that bucket a bit.)