Layer-wise inferencing and batching: Small VRAM doesn't limit LLM throughput (verdagon.dev)
2 points by verdagon on May 15, 2024

