> LLMs: Ollama for local models (also private models for now)
Incidentally, I decided to try the Ollama macOS app yesterday, and the first thing it does upon launch is try to connect to some Google domain. Not very private.
https://imgur.com/a/7wVHnBA
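For anyone who wants to reproduce this, here's a minimal sketch that polls the app's open network connections right after launch. It assumes macOS with lsof available; the process name "ollama" is a guess on my part and may differ for the desktop app (e.g. "Ollama"):

```python
import subprocess
import time

def open_connections(name: str) -> str:
    # -i: internet sockets only, -n/-P: skip DNS and port-name lookups,
    # -a: AND the filters, -c: match commands whose name starts with `name`
    result = subprocess.run(
        ["lsof", "-i", "-n", "-P", "-a", "-c", name],
        capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    for _ in range(10):  # sample for ~10 seconds after launching the app
        out = open_connections("ollama")
        if out:
            print(out)
        time.sleep(1)
```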
Yep, and I’ve noticed the same thing in VS Code with both the Cline and Copilot plugins.
I configure them both to use local Ollama and block their outbound connections via Little Snitch, and they just flat out don’t work without the ability to phone home or reach PostHog (a quick way to rule out the local backend is sketched below).
Super disappointing that Cline attempts so much outbound communication, even after turning telemetry off in the settings.
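A quick sanity check that the local Ollama server answers on its default port (11434) helps pin plugin failures on the plugins’ own phone-home requirements rather than the model backend. The model name "llama3" here is an assumption; substitute whatever `ollama list` shows on your machine:

```python
import json
import urllib.request

# Build a non-streaming generate request against the local Ollama API.
payload = json.dumps({
    "model": "llama3",  # assumption: swap in a model you have pulled
    "prompt": "Say hello in one word.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# If this prints a completion, the local backend is fine and any plugin
# breakage under Little Snitch is down to its other outbound connections.
with urllib.request.urlopen(req, timeout=60) as resp:
    print(json.loads(resp.read())["response"])
```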