
> a base for fine-tuning a task specific model

Yes. The 7B size can perform very well when fine-tuned on a specific menu of tasks (a minimal fine-tuning sketch follows the links below):

1. Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks

https://huggingface.co/papers/2305.14201

2. Gorilla: Large Language Model Connected with Massive APIs

https://arxiv.org/abs/2305.15334
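
For concreteness, here is a minimal sketch of that kind of task-specific tuning, using LoRA via the Hugging Face transformers/peft/datasets stack. The checkpoint name and the arithmetic_tasks.json dataset file are assumptions for illustration; this is not the exact recipe of either paper:

  # Minimal LoRA fine-tune of a 7B LLaMA-family model (sketch, not either paper's recipe).
  # Assumes: an Ampere-class GPU (for bf16), the "huggyllama/llama-7b" checkpoint,
  # and a hypothetical JSON file of task examples with a "text" field.
  import torch
  from datasets import load_dataset
  from peft import LoraConfig, get_peft_model
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling,
                            Trainer, TrainingArguments)

  base = "huggyllama/llama-7b"  # assumed base checkpoint
  tokenizer = AutoTokenizer.from_pretrained(base)
  tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

  model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
  # Wrap the base model with small trainable LoRA adapters on the attention projections.
  model = get_peft_model(model, LoraConfig(
      r=8, lora_alpha=16, lora_dropout=0.05,
      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

  # Hypothetical task-specific dataset; each record has a "text" field.
  data = load_dataset("json", data_files="arithmetic_tasks.json")["train"]
  data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

  Trainer(
      model=model,
      args=TrainingArguments(output_dir="llama7b-task",
                             per_device_train_batch_size=4,
                             num_train_epochs=3, learning_rate=2e-4, bf16=True),
      train_dataset=data,
      # mlm=False gives standard next-token (causal LM) labels.
      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
  ).train()

Both papers start from a LLaMA base in roughly this spirit and differ mainly in the task data: Goat trains on synthetically generated arithmetic problems, while Gorilla trains on API-call instruction data.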



