> IDK, I'm not convinced, by all that I've seen, that GPT is capable of that higher-order thinking. I fear it requires a degree of epistemology that GPT, as a stochastic token-guesser, fundamentally doesn't possess. It never pushes back against a request, or asks whether you really intend a different question by your first one. It never tries to read through your requirements to grasp the underlying problem that's prompting them.
It can if prompted appropriately. If you're just using the default ChatGPT interface and system prompt, it doesn't, but then, in that application it is intended to be compliant outside of its safety limits. (I am not arguing it has the analytical capacity to be suited for the role being discussed; only that the particular complaint about excessive compliance is a matter of prompting, not model capacity.)
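
For what it's worth, here's a minimal sketch of what I mean by "prompted appropriately", using the OpenAI Python client. The model name and the system prompt wording are my own illustrative choices, not a recommendation:

```python
# Minimal sketch: steering the model toward pushback via a system prompt.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Illustrative system prompt: instructs the model to question requests
# rather than comply by default.
SYSTEM_PROMPT = (
    "You are a critical reviewer. Before answering, consider whether the "
    "user's request reflects their actual underlying problem. Push back on "
    "requests that seem misdirected, and ask clarifying questions instead "
    "of complying when requirements are ambiguous."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever you have access to
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Write a script that parses this HTML with regex."},
    ],
)
print(response.choices[0].message.content)
```

With a prompt like that, the same model that cheerfully complies in the default ChatGPT interface will often ask what the HTML is for, or suggest a parser instead. Whether its pushback is *insightful* is the separate capacity question.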