I've been thinking about this: most programmers use frameworks/libraries, e.g. Spring/Hibernate in Java or React in JavaScript. Is there a way to train an LLM to "specialize" in our frameworks/libraries of choice? I assume it would give faster/smaller/more accurate results?
Models like Falcon 40B can be fine-tuned with a technique like LoRA, but their coding ability is weak. In the near future we will have better open-source models, but even now it's feasible for certain narrow domains.
Normally with the ChatGPT API you just feed API information or examples into the prompt. One version of GPT-4 has a 32k context, the other has 8k, and 3.5 now has 16k, so you can give it a lot of useful information and make it work quite a lot better for a specific task. For something like React or Spring in general, depending on what you mean, that might be a huge amount of info to keep current on. But if you narrow it down to a few modules, you can give it the latest API info, etc.
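As a minimal sketch of that prompt-stuffing approach — the doc snippets and the question here are invented placeholders, not a real library's docs:

```python
# Sketch: narrow the scope to one module's API docs and front-load them
# into the system prompt, so the model answers from current information
# rather than whatever it memorized at training time.

API_DOCS = """\
useQuery(queryKey, queryFn, options) -> { data, error, isLoading }
useMutation(mutationFn, options) -> { mutate, status }
"""

def build_messages(question: str) -> list[dict]:
    """Build a ChatGPT-style message list with the docs in the system prompt."""
    system = (
        "You are an assistant specialized in this library. "
        "Answer using ONLY the API described below:\n\n" + API_DOCS
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# This messages list is what you'd pass to the chat completions endpoint.
messages = build_messages("How do I fetch a list of users?")
```

With an 8k–32k window you can fit quite a lot of documentation this way; the tradeoff is you pay for those tokens on every request.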
Another option now is to feed ChatGPT a list of functions it can call, along with their argument schemas. It generally won't screw up the actual function call, even with 3.5.
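A rough sketch of how that fits together, assuming a made-up `search_users` function: you declare a JSON schema in the shape the function-calling API expects, and dispatch the call object the model returns (the model sends its arguments as a JSON string):

```python
import json

# Hypothetical function schema, in the format the ChatGPT function-calling
# API expects. The function itself is invented for illustration.
FUNCTIONS = [{
    "name": "search_users",
    "description": "Search users by name prefix.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "Name prefix to match"},
            "limit": {"type": "integer", "description": "Max results"},
        },
        "required": ["name"],
    },
}]

def search_users(name: str, limit: int = 10) -> list[str]:
    """Stand-in for a real library call being wrapped."""
    users = ["alice", "alan", "bob"]
    return [u for u in users if u.startswith(name)][:limit]

DISPATCH = {"search_users": search_users}

def run_function_call(call: dict):
    """Execute a function_call object as the model returns it."""
    args = json.loads(call["arguments"])  # arguments arrive JSON-encoded
    return DISPATCH[call["name"]](**args)

# Simulated model output, as if ChatGPT chose to call the function:
result = run_function_call({"name": "search_users",
                            "arguments": '{"name": "al", "limit": 5}'})
# result == ["alice", "alan"]
```

You'd pass `FUNCTIONS` with the request, then feed `result` back to the model in a follow-up message so it can phrase the final answer.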
With ChatGPT Plugins you can give it an OpenAPI spec.
Then you implement the functions/API you gave it, so they can be a wrapper around an existing library.
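As a sketch, the OpenAPI spec for such a wrapper might describe a single endpoint; the path, operation ID, and parameter names here are all made up, and your server behind this endpoint would just call the underlying library:

```yaml
openapi: 3.0.1
info:
  title: User Search Plugin
  version: "1.0"
paths:
  /search_users:
    get:
      operationId: searchUsers
      summary: Search users by name prefix (wraps an existing library call)
      parameters:
        - name: name
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Matching user names
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
```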