
What? That's not true at all. Citation needed!



"Huggingface isn't local, while Ollama is." - ms-menardi


It's simply wrong. Hugging Face is local.


I think you are arguing at cross-purposes. If you use Hugging Face simply to download models and run them locally with ollama or llama.cpp, then yes, it's "local". But you can also use it as a hosted service that runs models for you (which is how they make money), and that obviously isn't local.
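To make the download-only case concrete, here's a minimal sketch using the huggingface_hub library; the repo_id and filename are illustrative placeholders, not a recommendation:

    from huggingface_hub import hf_hub_download

    # Fetch a single GGUF file into the local HF cache and print its path.
    # repo_id and filename below are examples; substitute whatever model you want.
    path = hf_hub_download(
        repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
        filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    )
    print(path)  # point llama.cpp (or an ollama Modelfile) at this local file

After the download, nothing touches the network again; the file lives in your local cache.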


Hugging Face lets you run models locally with Hugging Face's own code! They started that way; the managed/hosted offering came later.

The downvoters are presumably the folks who jumped into this craze post-ChatGPT.


I have no idea where people got the idea that Hugging Face isn't local. I mean, they document how to run everything locally, with all the quantization strategies you could want, and with far fewer bugs.
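For instance, here's a rough sketch of fully local inference with transformers plus 4-bit quantization via bitsandbytes. The model id is just an example, and 4-bit loading assumes a CUDA GPU with bitsandbytes installed:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example; any causal LM repo works

    # Quantize weights to 4-bit at load time so the model fits on consumer GPUs.
    bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb, device_map="auto"
    )

    # Everything from here on runs on your own hardware.
    inputs = tok("Explain why this runs locally:", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))

The weights download once to the local cache; generation itself never leaves your machine.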



