Yes. A 7B model can perform very well when fine-tuned on a specific, narrow set of tasks (see the sketch after the list):
1. Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks
https://huggingface.co/papers/2305.14201
2. Gorilla: Large Language Model Connected with Massive APIs
https://arxiv.org/abs/2305.15334
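Both papers specialize a 7B LLaMA via supervised fine-tuning on task-specific data. Below is a minimal sketch of that recipe using Hugging Face Transformers with LoRA adapters, a common way to fine-tune 7B models on modest hardware. The model name, dataset file, and hyperparameters are illustrative assumptions, not the exact setups from either paper:

```python
# Hedged sketch: task-specific LoRA fine-tuning of a 7B causal LM.
# Assumptions: model checkpoint, data file "task_examples.jsonl"
# (one {"text": ...} example per line), and all hyperparameters.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "huggyllama/llama-7b"  # assumption: any 7B causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")

# LoRA freezes the base weights and trains small adapter matrices,
# which is how 7B models are commonly specialized on a single task.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Task data: plain-text examples of the target task (e.g. arithmetic
# problems with worked answers, in the spirit of Goat).
data = load_dataset("json", data_files="task_examples.jsonl")["train"]
data = data.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama7b-task",
                           num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-4,
                           fp16=True, logging_steps=50),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key design choice is narrowness: the fine-tuning data covers one task family, which is what lets a 7B model match or beat much larger general-purpose models on that task.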