
You should definitely be able to run 7B at q6_k, and that might be outperformed by a 15B with a sub-4bpw imatrix quant; iQ3_M should fit into your VRAM. (I personally wouldn't bother with sub-4bpw quants on models under ~70B parameters.)

If it all works great for you, there's no reason to mess with it, but if you want to tinker you can absolutely run larger models at smaller quant sizes. q6_k is basically indistinguishable from fp16, so there's no real downside.
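To make the VRAM trade-off concrete, here's a rough back-of-the-envelope sketch in Python. The bpw figures for q6_k (~6.56) and iQ3_M (~3.66) are approximate, and the flat overhead term is just a placeholder for KV cache and runtime buffers, so treat the outputs as ballpark numbers, not exact requirements:

    # Rough estimate of VRAM needed to load quantized weights.
    # Real usage also depends on context length (KV cache), activations,
    # and per-tensor quantization overhead; overhead_gb is a crude stand-in.

    def model_vram_gb(params_billion: float, bits_per_weight: float,
                      overhead_gb: float = 1.5) -> float:
        """Approximate GB needed for the weights plus a flat overhead."""
        weight_bytes = params_billion * 1e9 * bits_per_weight / 8
        return weight_bytes / 1024**3 + overhead_gb

    # 7B at q6_k (~6.56 bpw) vs. 15B at an iQ3_M-style quant (~3.66 bpw)
    print(f"7B  @ q6_k  ~ {model_vram_gb(7, 6.56):.1f} GB")
    print(f"15B @ iQ3_M ~ {model_vram_gb(15, 3.66):.1f} GB")

Both land in roughly the same 7-8 GB range, which is why the bigger model at a lower bpw can be a viable swap on the same card.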
