Is there some documentation on how to run this setup?

How fast is your setup?


I'm doing this on a Mac Studio with 128 GB too. I'm using llama.cpp.
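For anyone wondering what running this actually looks like: the thread is using llama.cpp directly, but its Python bindings (llama-cpp-python) expose the same knobs. A minimal sketch, with a placeholder model path rather than anyone's actual config:

    # pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-70b.Q4_K_M.gguf",  # placeholder path
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload all layers to the GPU (Metal on Apple Silicon)
    )

    out = llm("Q: What is unified memory? A:", max_tokens=128)
    print(out["choices"][0]["text"])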


Since you get GPU acceleration (thanks to the unified memory), I imagine this is much faster than the PC setup?

Edit: It seems some people are getting 1-2.6 tokens/sec with quantized Llama 70B on Ryzen (no GPU acceleration): https://www.reddit.com/r/LocalLLaMA/comments/15rqkuw/llama_2...

Whereas a Mac Studio gets 13 tokens/sec: https://blog.gopenai.com/how-to-deploy-llama-2-as-api-on-mac...


Friendly internet stranger’s input:

- you don’t get GPU acceleration just by using unified memory. Llama.cpp still only uses the CPU on Apple Silicon chips.

- the difference in tokens/sec is likely attributable to memory bandwidth. Mac Studios with the base Max chip have 400 GB/s of memory bandwidth, compared to around 50 GB/s for Ryzen 5000 series CPUs (rough math in the sketch below).
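A back-of-envelope check of that bandwidth argument, assuming a ~4-bit quantized 70B model of roughly 40 GB (an assumption, not a figure from the thread):

    # Token generation is roughly memory-bandwidth bound: each generated
    # token has to stream every weight once. Sizes/bandwidths approximate.
    model_bytes = 40e9  # ~70B params at ~4.5 bits/param after quantization

    for name, bandwidth in [("Ryzen 5000 (DDR4)", 50e9), ("M1/M2 Max", 400e9)]:
        print(f"{name}: ~{bandwidth / model_bytes:.1f} tokens/sec upper bound")

    # Prints roughly 1.2 and 10 tokens/sec -- the same ballpark as the
    # 1-2.6 and 13 tokens/sec figures cited upthread.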


Llama.cpp defaults to using Metal. [0]

[0] https://github.com/ggerganov/llama.cpp#metal-build
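If you want to see the Metal effect for yourself, one way is to time the same prompt with and without GPU offload. A sketch via the llama-cpp-python bindings again, with the same placeholder model path:

    import time
    from llama_cpp import Llama

    def toks_per_sec(n_gpu_layers):
        # placeholder model path, same caveat as above
        llm = Llama(model_path="./models/llama-2-70b.Q4_K_M.gguf",
                    n_gpu_layers=n_gpu_layers, verbose=False)
        t0 = time.time()
        out = llm("Once upon a time", max_tokens=64)
        n = out["usage"]["completion_tokens"]
        return n / (time.time() - t0)

    print("CPU only:", toks_per_sec(0))   # no layers offloaded
    print("Metal   :", toks_per_sec(-1))  # all layers offloaded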