
Lots. Llama 2 was trained with a 4K context window, but it can run on arbitrarily long inputs; the results just turn to garbage the further you go past that length.

I refer you to https://blog.gopenai.com/how-to-speed-up-llms-and-use-100k-c... for an "easy"-to-digest summary.
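
To make the point concrete, here is a minimal sketch using the Hugging Face transformers library (the checkpoint name and the oversized input are my own assumptions, not from the article): the config advertises a 4096-token trained window, yet nothing stops you from tokenizing and feeding something much longer, and quality simply degrades beyond the trained positions.

  # Sketch only: assumes the meta-llama/Llama-2-7b-hf checkpoint is available.
  from transformers import AutoConfig, AutoTokenizer

  model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint name

  config = AutoConfig.from_pretrained(model_id)
  print(config.max_position_embeddings)  # 4096: the context length Llama 2 was trained on

  tokenizer = AutoTokenizer.from_pretrained(model_id)
  long_text = "some document text " * 2000       # far more than 4K tokens
  ids = tokenizer(long_text, return_tensors="pt").input_ids
  print(ids.shape[-1])  # the model will still accept an input this long,
                        # but output quality past ~4K positions is unreliable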



