Hmm. I'm running decent LLMs locally (deepseek-coder:33b-instruct-q8_0, mistral:7b-instruct-v0.2-q8_0, mixtral:8x7b-instruct-v0.1-q4_0) on my MacBook Pro and they respond quickly. At least for interactive use they're fine, and comparable in speed to Anthropic's Claude Opus.
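For anyone curious how to try this: a minimal sketch of querying one of these models, assuming they're served through Ollama (the model tags above follow Ollama's naming convention, though the comment doesn't say so explicitly; the endpoint and payload fields are Ollama's documented defaults, and the prompt is just illustrative):

    import requests

    # Ollama serves a local REST API on port 11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral:7b-instruct-v0.2-q8_0",  # one of the models named above
            "prompt": "Write a haiku about local inference.",
            "stream": False,  # return the full completion at once
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])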
That MacBook has an M3 Max and 64GB RAM.
I'd say it does live up to my expectations, perhaps even slightly exceeds them.