
I have already replaced Copilot with continue.dev + qwen2.5-coder:1.5b on Ollama and don't see myself going back.

For the last year, Copilot completions have been slow and unreliable: coming in at random moments, messing up syntax in dumb ways, and sometimes not showing up at all. It has been painfully bad even though it used to be good. Maybe it's since they switched from Codex to the general GPT models ...
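For anyone wanting to try the same thing, the setup is basically: pull the model with Ollama, then point Continue's tab autocomplete at it. A rough sketch from memory (exact config keys may differ between Continue versions, so check their docs):

    # assumes Ollama is installed and running locally
    ollama pull qwen2.5-coder:1.5b

    # ~/.continue/config.json -- point tab autocomplete at the local model
    {
      "tabAutocompleteModel": {
        "title": "Qwen2.5-Coder 1.5B",
        "provider": "ollama",
        "model": "qwen2.5-coder:1.5b"
      }
    }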




I've had the opposite experience - I tried continue.dev and for me it doesn't come close to Copilot. Especially with Copilot chat having o1-preview and Sonnet 3.5 for so cheap that I single-handedly might bankrupt Microsoft (we can hope), but I tried it before that was available and the inline completions were laughably bad in comparison.

I used the recommended models and couldn't figure it out; I assume I did something wrong, but I followed the docs and triple-checked everything. It'd be nice to use the GPU I have locally for faster completions/privacy, I just haven't found a way to do that.
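From what I can tell, the rough checklist would be something like this (I haven't verified it end to end, and the Ollama output may differ by version):

    # pull a model and confirm the local Ollama server sees it
    ollama pull qwen2.5-coder:1.5b
    curl http://localhost:11434/api/tags   # default Ollama port; should list the model

    # after triggering a completion, check whether the model actually loaded onto the GPU
    ollama ps                              # the PROCESSOR column shows GPU vs CPU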


The last couple of times I tried "continue" it felt like "Step 1" in someone's business plan: bulky and seconds away from converting into a paid subscription model.

Additionally, I've tried a bunch of these (even the same models, etc) and they've all sucked compared to Copilot. And believe me, I want that local-hosted sweetness. Not sure what I'm doing wrong when others are so excited by it.


I just tried Continue and it was death by 1000 paper cuts. And by that I mean 1000 accept/reject blocks.

And at some point I asked it to change a pretty large file in some way. It started processing very, very slowly, and I couldn't figure out a way to stop it. I had to restart VS Code because it was still changing the file 10 minutes later.

Copilot was also very slow when I tried it yesterday but at least there was a clear way to stop it.


Do you have a guide for how to set this up? I am also pretty dissatisfied with Copilot completions.


Here you go: https://docs.continue.dev/autocomplete/model-setup

The sibling comment also describes the process for chat, which I personally don’t care about.


Probably via Tabby (https://www.tabbyml.com/)


Tabby is great! Though they broke the vim plugin when moving to LSP support, an older version still works fine.


looks like the parent comment hints at https://docs.continue.dev/chat/model-setup#local-offline-exp...

(Assuming your computer has the specs necessary to run ollama)
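IIRC the local/offline setup there boils down to adding an Ollama-backed entry to the models list, roughly like this (going from memory of the docs, so treat the exact keys and model tag as approximate):

    # ~/.continue/config.json -- chat model served by local Ollama
    {
      "models": [
        {
          "title": "Llama 3.1 8B",
          "provider": "ollama",
          "model": "llama3.1:8b"
        }
      ]
    }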


That page is about chat, while the parent comment seems to be about completions - that's the Copilot feature most of us will be looking to replace/improve on.


Here's the similar docs page for completions: https://docs.continue.dev/autocomplete/model-setup#local-off...



Also interested!


Same, TabbyML + llama.cpp running StarCoder-3B runs great; excellent completions for Go, Terraform, shell, Nix.
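For anyone curious, the serve command is roughly this (model name and flags from memory, so check the Tabby docs for the exact registry id and the right device flag for your hardware):

    # downloads the model on first run and serves completions on the default port
    tabby serve --model StarCoder-3B --device cuda

    # on CPU-only machines, drop --device (slower but works)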



