Statements like this tell me your analysis is built on misunderstandings:
"Why is this crazy? Well, it’s crazy that GPT’s quality and generalization can improve when you’re more vague – this is a quintessential marker of higher-order delegation / thinking."
No, there is no "higher-order thought" happening here, or any thought at all, actually. That's not how these models work: they autoregressively predict the next token, and a vaguer prompt simply constrains the output distribution less, which can happen to produce answers that look more general.