Is this meant to be how the ChatGPT designers/operators instruct ChatGPT to operate? I guess I shouldn't be surprised if that's the case, but I still find it pretty wild that they would parameterize it by speaking to it so plainly. They even say "please".
> I still find it pretty wild that they would parameterize it by speaking to it so plainly
Not my area of expertise, but they probably fine-tuned it so that it can be parametrized this way.
In the fine-tuning dataset there are many examples of a system prompt specifying tools A/B/C, with the AI assistant making use of those tools to respond to user queries.
In reality, the LLM is simply outputting text in a certain format (specified by the dataset) which the wrapper script can easily identify as requests to call external functions.
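To make that concrete, here's a toy sketch of what such a wrapper could look like. The `<tool_call>` markup and the JSON payload are made up for illustration; real systems use whatever format their fine-tuning data established, and often parse at the token level rather than with a regex.

```python
import json
import re

# Hypothetical markup the model was (in this sketch) trained to emit.
TOOL_CALL = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def dispatch(llm_output, tools):
    """Scan model output for tool-call markup and run the matching function."""
    match = TOOL_CALL.search(llm_output)
    if match is None:
        return llm_output  # plain text reply, no tool requested
    call = json.loads(match.group(1))
    fn = tools[call["name"]]
    return fn(**call.get("arguments", {}))

# Toy tool registry and a model reply containing a tool call.
tools = {"add": lambda a, b: a + b}
reply = 'Sure. <tool_call>{"name": "add", "arguments": {"a": 2, "b": 3}}</tool_call>'
print(dispatch(reply, tools))  # 5
```

From the model's perspective it's all just text; the wrapper is what turns a recognizable pattern into an actual function call.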
If you want to go the stochastic parrot route (which I don't fully buy): statistically speaking, a request paired with "please" is more likely to be met, so the same holds for requests passed to an LLM. They really do tend to respond better when you use your manners.
There's a certain logic to it, if I'm understanding how it works correctly. The training data is real interactions online. People tend to be more helpful when they're asked politely. It's no stretch that the model would act similarly.
From my experience with 3.5 I can confirm that saying please or giving your reasoning really helps to get whatever results you want. Especially if you want to manifest 'rules'.