I guess the issue is that programmers work with a really big context window, and need to reason at multiple levels of abstraction depending on the problem.
A great idea to solve a problem at one level of abstraction / context might be a terrible "strategic" idea at a higher level of abstraction. This is what separates the "junior" engineers from "senior" engineers, speaking very loosely.
IDK, I'm not convinced by what I've seen that GPT is capable of that higher-order thinking. I fear it requires a degree of epistemology that GPT, as a stochastic token-guesser, fundamentally doesn't possess. It never pushes back against a request, or asks whether the question you posed is really the question you meant to ask. It never tries to read through your requirements to grasp the underlying problem that's prompting them.
Maybe some combination of static tools, senior caretakers, and prompt hackery can get us to a solution that maintains code effectively. But I don't think you can throw out the senior caretakers; their verification involvement is really necessary. And I don't know how conducive this environment would be to developing the next generation of "senior caretakers".
> IDK, I'm not convinced by what I've seen that GPT is capable of that higher-order thinking. I fear it requires a degree of epistemology that GPT, as a stochastic token-guesser, fundamentally doesn't possess. It never pushes back against a request, or asks whether the question you posed is really the question you meant to ask. It never tries to read through your requirements to grasp the underlying problem that's prompting them.
It can if prompted appropriately. If you are just using the default ChatGPT interface and system prompt, it doesn't, but then, it is intended to be compliant outside of its safety limits in that application. (I am not arguing it has the analytical capacity to be suited for the role being discussed, but the particular complaint about excessive compliance is a matter of prompting, not model capacity.)
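To make that concrete, here's a minimal sketch of what "prompted appropriately" could look like, assuming the OpenAI Python SDK; the model name and the system prompt wording are purely illustrative, not a canonical recipe:

```python
# Sketch: overriding the default "compliant assistant" behavior with a system
# prompt that tells the model to push back and ask clarifying questions.
# The model name and prompt wording below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a senior engineer reviewing requests, not fulfilling them blindly. "
    "Before proposing any solution: (1) restate the underlying problem you think "
    "the requester is actually trying to solve, (2) push back if the request seems "
    "like a poor fit for that problem, and (3) ask clarifying questions when the "
    "requirements are ambiguous rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any chat model is driven the same way
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Add a retry loop around this flaky database call."},
    ],
)

print(response.choices[0].message.content)
```

The specific wording isn't the point; the point is that the never-ask-questions behavior comes from the default system prompt, and instructions like these tend to get the model to question or object to a request before producing code.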