I think you are arguing at cross-purposes. If you use Hugging Face simply to download models and run them yourself with ollama or llama.cpp, then yes, it's "local". But you can also use it as a hosted service that runs the models for you (which is how they make money), and in that case it obviously isn't local.
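For anyone unsure what the "just a download" path looks like, here's a rough sketch using huggingface_hub; the repo and file names are only examples, any GGUF repo works:

    # Sketch: pull a GGUF once, then Hugging Face is out of the loop.
    # Repo/file names below are examples, not a recommendation.
    from huggingface_hub import hf_hub_download

    gguf_path = hf_hub_download(
        repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
        filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    )

    # From here it's just a local file; point llama.cpp at it, e.g.:
    #   llama-cli -m <gguf_path> -p "Hello"
    print(gguf_path)

After that first download nothing has to leave your machine; the Hub is only acting as a file host.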
I have no idea where people got the idea that Hugging Face isn't local. Their docs show you how to run everything locally, with every quantization strategy you could want, and with far fewer bugs.
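As a minimal sketch of what that local-with-quantization path looks like using transformers plus bitsandbytes 4-bit loading (the model id and generation settings here are placeholders, assuming torch, transformers, accelerate, and bitsandbytes are installed):

    # Sketch: fully local inference with 4-bit quantization via transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder, any causal LM

    # Quantize weights to 4-bit at load time to fit consumer GPUs.
    quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant,
        device_map="auto",  # place layers on whatever local hardware is available
    )

    inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

Once the weights are cached, that whole loop runs offline.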