> I still find it pretty wild that they would parameterize it by speaking to it so plainly
Not my area of expertise, but they probably fine-tuned it so that it can be parameterized this way.
In the fine-tuning dataset there are many examples of a system prompt specifying tools A/B/C, with the AI assistant using those tools to respond to user queries.
Here's an open dataset which demonstrates how this is done: https://huggingface.co/datasets/togethercomputer/glaive-func.... It contains hundreds of examples showing the LLM how to call external tools.
In reality, the LLM is simply outputting text in a certain format (specified by the dataset), which the wrapper script can parse as requests to call external functions.
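To make that last point concrete, here's a minimal sketch of what such a wrapper might look like. The `<functioncall>` sentinel, the JSON shape, and the `get_weather` tool are all made-up assumptions for illustration (real implementations vary by model and dataset), but the idea is the same: scan the model's raw text output for a trained-in marker and dispatch to a real function.

```python
import json
import re

# Hypothetical model output: the fine-tune teaches the model to wrap tool
# calls in a sentinel format that the wrapper can spot in plain text.
model_output = (
    'Let me look that up. '
    '<functioncall> {"name": "get_weather", "arguments": {"city": "Paris"}} </functioncall>'
)

# Registry of actual functions the wrapper exposes as "tools" (names invented here).
def get_weather(city):
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def handle(output):
    # Look for the sentinel-delimited JSON the model was trained to emit.
    match = re.search(r"<functioncall>\s*(\{.*\})\s*</functioncall>", output)
    if not match:
        return output  # plain text reply, no tool requested
    call = json.loads(match.group(1))
    # Dispatch to the registered function with the model-supplied arguments.
    return TOOLS[call["name"]](**call["arguments"])

print(handle(model_output))
```

The model never "calls" anything itself; it only emits text matching the pattern, and the wrapper does the rest.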