
It's really sad that Cursor doesn't support local models yet (afaiu even if you point it at your own API URL, the requests are still made from their servers rather than your machine). Is there a VS Code plugin or another editor that does?

With models like CodeGemma and Command-R+, running them locally makes more and more sense.


https://github.com/huggingface/llm-vscode

  "llm.backend": "ollama",
  "llm.url": "http://localhost:11434/api/generate",
  "llm.modelId": "codegemma:2b-code-q8_0",
  "llm.configTemplate": "Custom",
  "llm.fillInTheMiddle.enabled": true,
  "llm.fillInTheMiddle.prefix": "<|fim_prefix|>",
  "llm.fillInTheMiddle.middle": "<|fim_middle|>",
  "llm.fillInTheMiddle.suffix": "<|fim_suffix|>",
  "llm.tokensToClear": ["<|fim_prefix|>", "<|fim_middle|>", "<|fim_suffix|>", "<|file_separator|>"],


I've been playing with Continue: https://github.com/continuedev/continue
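
For local models you can point it at Ollama; a ~/.continue/config.json roughly along these lines worked for me (treat it as a sketch: field names can differ between Continue versions, and the model tags are just the ones I happened to have pulled):

  {
    "models": [
      {
        "title": "Command R+ (Ollama)",
        "provider": "ollama",
        "model": "command-r-plus"
      }
    ],
    "tabAutocompleteModel": {
      "title": "CodeGemma (Ollama)",
      "provider": "ollama",
      "model": "codegemma:2b-code-q8_0"
    }
  }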


ty for the pointer!


Cody supports local inference with Ollama for both Chat and Autocomplete. Here's how to set it up: https://sourcegraph.com/blog/local-chat-with-ollama-and-cody :)
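
For the impatient, the autocomplete side boils down to a couple of VS Code settings roughly like this (the keys were experimental when the post was written and may have changed, and the model tag is just an example, so follow the blog post for the current names):

  {
    "cody.autocomplete.advanced.provider": "experimental-ollama",
    "cody.autocomplete.experimental.ollamaOptions": {
      "url": "http://localhost:11434",
      "model": "codellama:7b-code"
    }
  }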


ty for the pointer!


According to https://forum.cursor.sh/t/support-local-llms/1099/7, the Cursor servers do a lot of work between your local machine and the model, so porting all of that to run on users' laptops is going to take a while.

