Even as LLMs get better over time at understanding ill-formed prompts, I expect API prices will continue to depend on the number of tokens used. That's an incentive to minimize tokens, so "prompt engineering" might stick around, if only for cost optimization.
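To make the incentive concrete, here's a back-of-the-envelope cost model where spend scales linearly with token count. The per-million-token prices are hypothetical placeholders, not quotes from any real provider's price sheet:

```python
# Simple linear cost model: spend = tokens × price-per-token.
# Prices below are hypothetical, for illustration only.

def prompt_cost_usd(input_tokens: int, output_tokens: int,
                    input_price_per_m: float = 3.0,
                    output_price_per_m: float = 15.0) -> float:
    """Estimate one request's cost from token counts and per-million prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Trimming a 2,000-token prompt to 500 tokens saves the same fraction of
# input spend on every single call, which compounds at high request volume.
verbose = prompt_cost_usd(2_000, 500)
terse = prompt_cost_usd(500, 500)
print(f"verbose: ${verbose:.4f}, terse: ${terse:.4f}")
```

At these assumed rates the per-request savings look tiny, but multiplied across millions of calls they can justify keeping prompts tight.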
Do you not expect token prices to keep decreasing over time? There will be businesses using a less cutting-edge model, and for them the length of a prompt won't be a big contributor to total spend.
Good point. On the other hand, for every business that sticks with a less advanced model, there might be a competitor around the corner running the cutting-edge one in an attempt to serve customers better.