I largely agree, but I don't see how ChatGPT hits the same use cases as a fine-tuned model. Prompts are limited to 8K tokens, so any "in-prompt" fine-tuning would have to be fairly limited. That said, I'm not certain the lack of ChatGPT fine-tuning is a permanent limitation.