> I decided to explore self-hosting some of my non-critical applications
Self-hosting static or almost-static websites is now really easy with a Cloudflare front. I just closed my account on SmugMug and published my images locally using my NAS; this costs basically nothing extra, since the photos were already on the NAS and the NAS is already powered on 24/7.
The NAS I use is an Asustor, so it's not a regular Linux box and you can't install whatever you want on it, but it has Apache, Python, and PHP with the SQLite extension, which is more than enough for basic websites.
The Cloudflare free tier is like magic. Response times are near instantaneous and setup is minimal. You don't even have to configure an SSL certificate locally; it's all handled for you and works for wildcard subdomains.
And of course if one puts a real server behind it, like in the post, anything's possible.
Do they ever publish an actual number on this? Given the size of HTML documents vs. images, I imagine it's a limit that can be exceeded very easily without knowing.
E.g. is running a personal photography website OK?
You could also use OpenVPN or WireGuard and not have a man in the middle for no reason.
I have a VPN on a Raspberry Pi, and with that I can connect to my self-hosted cloud, dev/staging servers for projects, GitLab, etc. when I'm not on my home network.
I believe the suggested setup was for making a site and images available to the public, for which hiding the origin behind Cloudflare seems a very good reason. Some public IP will need to have ports 443/80 open.
That requires opening a firewall port on the router. For some people that might not be possible, e.g. due to ISP restrictions such as CGNAT. In those cases, they're better off using something like Tailscale.
The web server on the NAS is exposed to the Internet (port 80 forwarded from the router to the NAS); the rest of the NAS is not exposed / not accessible from outside the LAN.
The images that are published are low-res versions copied to a directory on a partition accessible to the web server.
This is not the safest solution, as it does punch a hole in the LAN... It's kind of an experiment... We'll see how it goes.
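Not part of the original setup, but here's a rough sketch of how those low-res copies could be generated with Python (which the Asustor ships); Pillow and the paths are assumptions, purely for illustration:

    # Hypothetical helper: walk the photo library and write web-sized copies
    # into the directory the web server can reach. Assumes Pillow is installed;
    # the paths are just examples.
    from pathlib import Path
    from PIL import Image

    SOURCE = Path("/volume1/Photos")       # full-resolution originals (example path)
    PUBLIC = Path("/volume1/Web/photos")   # directory served by Apache (example path)
    MAX_SIDE = 1600                        # longest edge of the published copy

    for src in SOURCE.rglob("*.jpg"):
        dst = PUBLIC / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        if dst.exists() and dst.stat().st_mtime >= src.stat().st_mtime:
            continue                       # already up to date
        with Image.open(src) as im:
            im.thumbnail((MAX_SIDE, MAX_SIDE))   # resize in place, keeps aspect ratio
            im.save(dst, "JPEG", quality=85)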
I'm aware they wrap OSS, but they made it very, very easy to adopt and maintain for a large chunk of potential users. This requires significant effort and should not be undervalued, in my opinion.
Exactly, which means setting up a VPS, generating certificates, setting up some type of monitoring to make sure the tunnel is working, etc. I agree that WireGuard is the best option if you have the time and knowledge, but for devs who just want to put up a webpage with a few users, Tailscale/Cloudflare is a much easier system to maintain (especially as it handles SSL for you as well, to some degree...).
For the people who self-host LLMs at home: what use cases do you have?
Personally, I have some notes and bookmarks that I'd like to scrape, then have an LLM summarize, generate hierarchical tags, and store in a database. For the notes part at least, I wouldn't want to give them to another provider; even for the bookmarks, I wouldn't be comfortable passing my reading profile to anyone.
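I haven't built it yet; the sketch below is roughly what I have in mind, assuming a local model served by Ollama on its default port and a made-up table layout:

    # Sketch: summarize a page with a local LLM and store the result in SQLite.
    # Assumes Ollama is running at http://localhost:11434 with llama3.2 pulled;
    # the schema and URL are illustrative only.
    import sqlite3
    import requests

    def ask_local_llm(prompt, model="llama3.2"):
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        r.raise_for_status()
        return r.json()["response"]

    db = sqlite3.connect("bookmarks.db")
    db.execute("CREATE TABLE IF NOT EXISTS notes (url TEXT, summary TEXT, tags TEXT)")

    url = "https://example.com/some-article"
    text = requests.get(url, timeout=30).text   # real code would strip the HTML first

    summary = ask_local_llm("Summarize this page in three sentences:\n\n" + text[:8000])
    tags = ask_local_llm("Give hierarchical tags (topic/subtopic) for this summary, as a JSON list:\n\n" + summary)

    db.execute("INSERT INTO notes VALUES (?, ?, ?)", (url, summary, tags))
    db.commit()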
Llama 3.2 1B & 3B are really useful for quick tasks like creating quick scripts from some text and then pasting them to execute; it's super fast and replaces a lot of temporary automation needs. If you don't feel like investing time into automation, sometimes you can just feed the text into an LLM.
This is one of the reasons why I recently added a floating chat to https://recurse.chat/ to quickly access a local LLM.
Looks very nice, saved it for later. Last week, I worked on implementing always-on speech-to-text functionality for automating tasks. I've made significant progress, achieving decent accuracy, but I set myself the constraint of implementing certain parts from scratch so I can ship a single deployable binary, which means I still have work to do (audio processing is new territory for me). However, I'm optimistic about its potential.
That being said, I think the more straightforward approach would be to utilize an existing library like https://github.com/collabora/WhisperLive/ within a Docker container. This way, you can call it via WebSocket and integrate it with my LLM, which could also serve as a nice feature in your product.
Thanks! Lmk when/if you wanna give it a spin; the free trial hasn't been updated with the latest, but I'll try to do it this week.
I've actually been playing around with speech-to-text recently. Thank you for the pointer; Docker is a bit too heavy to deploy for a desktop app use case, but it's good to know about the repo. Building binaries with PyInstaller could be an option though.
Real-time transcription seems a bit complicated as it involves VAD, so a feasible path for me is to first ship simple transcription with whisper.cpp. large-v3-turbo looks fast enough :D
Can you list some real temporary automation needs you've fulfilled? The demo shows asking for facts about space. Lower-param models don't seem great as raw chat models, so I'm interested in what they're doing well for you in this context.
Things like grabbing some markdown text and asking for a pip/npm install one-liner, or quick JS scripts to paste into the console (when I didn't bother to open an editor); a fun use case was randomly drawing some lucky winners for the app giveaway from Reddit usernames. Mostly it's converting unstructured text into short/one-liner executable scripts, which doesn't require much intelligence. For more complex automation/scripts that I'll save for later, I do resort to providers (Cursor with Sonnet 3.5 mostly).
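For a concrete idea of that "paste text, get a one-liner back" pattern, here's a rough sketch against a local Ollama instance; the model name, prompt, and package list are just examples:

    # Turn a pasted blob of unstructured text into a single command.
    # Assumes Ollama running locally with llama3.2:3b pulled; purely illustrative.
    import requests

    snippet = """
    Install the following to get started:
    - requests
    - beautifulsoup4
    - lxml
    """

    prompt = ("Turn this into a single pip install command. "
              "Reply with the command only, no explanation:\n" + snippet)

    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
        timeout=60,
    )
    print(r.json()["response"].strip())   # e.g. pip install requests beautifulsoup4 lxml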
> So 8b is really smart enough to write scripts for you?
Depends on the model, but in general, no.
...but it's fine for simple one-liner commands like "how do I revert my commit?" or "rename these files to camelcase".
> How often does it fail?
Immediately and constantly if you ask anything hard.
An 8B model is not ChatGPT. The 3B model in the OP's post is not ChatGPT.
The capability compared to Sonnet/4o is like a potato compared to a car.
Search for 'LLM Leaderboard' and you can see for yourself. The 8B models do not even rank. They're generally not capable enough to use as a self-hosted assistant.
Well, considering it probably takes several hundred GB of VRAM to run inference for Claude, it's going to be a while.
But yes, like the guy above said, it's really only helpful for one-line commands. Like if I forgot some flag that's available for a certain type of command, or random things I don't work with often enough to memorize their little build commands, etc. It's not helpful for programming, just simple commands.
It also can help with unstructured or messy data to make it more readable, although there's potential to hallucinate if the context is at all large.
> Search for 'LLM Leaderboard' and you can see for yourself. The 8B models do not even rank.
This is not true. On benchmarks, maybe, but I find the LLM Arena more accurately accounts for the subjective experience of using these things, and Llama 3.1 8B ranks relatively high, outperforming GPT-3.5 and certain iterations of GPT-4.
Where the 8Bs do struggle is that they don't have as deep a repository of knowledge, so using them without some form of RAG won't get you as good results as using a plain larger model. But frankly I'm not convinced that RAG-free chat is the future anyway, and 8B models are extremely fast and cheap to run. Combined with good RAG they can do very well.
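To make concrete what "combined with good RAG" means here, a minimal sketch with naive cosine-similarity retrieval over a handful of documents; the endpoint, embedding model, and documents are all assumptions, not a recommendation of a specific stack:

    # Minimal retrieval-augmented sketch: embed a few documents, pick the most
    # similar one for a question, and stuff it into the prompt of a small model.
    # Assumes Ollama with llama3.1:8b and nomic-embed-text pulled locally.
    import math
    import requests

    OLLAMA = "http://localhost:11434"

    def embed(text):
        r = requests.post(OLLAMA + "/api/embeddings",
                          json={"model": "nomic-embed-text", "prompt": text})
        return r.json()["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    docs = [
        "The backup job runs nightly at 02:00 and writes to /volume1/Backups.",
        "The photo site is served by Apache on the NAS behind Cloudflare.",
    ]
    index = [(d, embed(d)) for d in docs]

    question = "Where do the nightly backups go?"
    q_vec = embed(question)
    context = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

    r = requests.post(OLLAMA + "/api/generate", json={
        "model": "llama3.1:8b",
        "prompt": "Answer using only this context:\n" + context + "\n\nQuestion: " + question,
        "stream": False,
    })
    print(r.json()["response"])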
All I can say is my experience is that this is the difference between wanting something to be true, and it actually being true.
> 8B models are extremely fast and cheap to run
yes.
> Combined with good RAG they can do very well.
This is simply not true. They perform at a level which is useful for simple, trivial tasks.
If you consider that 'doing well', then sure.
However, if, like the parent post, you want to be writing scripts, which is specifically what they asked about... then: heck, what 8B are you using? Because Llama 3.1 is shit at it out of the box.
¯\_(ツ)_/¯
A working unit test can take 6 or 7 iterations with a good prompt. Forget writing logic. Creating classes? Using RAG to execute functions from a spec? Forget it.
That's not the level that I need for an assistant.
For me at least, the biggest feature of some self-hosted LLMs is that you can get them to be "uncensored": you can get them to tell you dirty jokes or have the bias removed on controversial and politically incorrect subjects. Basically you have a freedom you won't get from most of the main providers.
And reliability. When Azure sends you the "censored output" status code, it has basically failed and no retry is going to help. And unless you are some big corp, you won't get approved for lifting the censoring.
I run Mistral Large on 2x A6000. Nine times out of ten the response is the same quality as GPT-4o. My employer does not allow the use of GPT for privacy-related reasons, so I just use a private Mistral for that.
I mostly use it to write quick scripts or generate text if it follows some pattern. Also, getting it up and running with LM Studio is pretty straightforward.
I've been enjoying fine-tuning various models with various data, for example 17 years of my own tweets, and then just cranking up the temperature and letting the model generate random crap that cracks me up. Is that practical? Is joy practical? I think there's a place for it.
All the self-hosted LLM and text-to-image models come with some restrictions trained into them [1]. However there are plenty of people who have made uncensored "forks" of these models where the restrictions have been "trained away" (mostly by fine-tuning).
You can find plenty of uncensored LLM models here:
[1]: I personally suspect that many LLMs are still trained on WebText, derivatives of WebText, or using synthetic data generated by LLMs trained on WebText. This might be why they feel so "censored":
>WebText was generated by scraping only pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The corpus was subsequently cleaned
The implications of so much AI being trained on content upvoted by 2015-2017 redditors are not talked about enough.
It has a sanitised output. You might want to look for "abliterated" models, where the general performance might drop a bit but the guard-rails have been diminished.
I’m curious about how good the performance with local LLMs is on ‘outdated’ hardware like the author’s 2060. I have a desktop with a 2070 super that it could be fun to turn into an “AI server” if I had the time…
I've been playing with some LLMs like Llama 3 and Gemma on my 2080Ti. If it fits in GPU memory the inference speed is quite decent.
However, I've found the quality of smaller models to be quite lacking. Llama 3.2 3B, for example, is much worse than Gemma 2 9B, which is the one I found performs best while still fitting comfortably.
Actual sentences are fine, but it doesn't follow prompts as well and it doesn't "understand" the context very well.
Quantization brings down memory cost, but there seems to be a sharp decline below 5 bits for those I tried. So a larger but heavily quantized model usually performs worse, at least with the models I've tried so far.
So with only 6GB of GPU memory I think you either have to accept the hit on inference speed by only partially offloading, or accept fairly low model quality.
Doesn't mean the smaller models can't be useful, but don't expect ChatGPT 4o at home.
That said, if you've got a beefy CPU, it can be reasonable to have it do a few of the layers.
Personally I found Gemma2 9B quantized to 6 bit IIRC to be quite useful. YMMV.
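For rough intuition on why 6 GB is so tight: weight memory alone is roughly parameters times bits per weight, before adding the KV cache and overhead. A quick back-of-the-envelope sketch (the figures are approximate):

    # Back-of-the-envelope VRAM estimate for quantized models: weights only,
    # ignoring the KV cache and runtime overhead, so real usage is somewhat higher.
    def weight_gib(params_billion, bits_per_weight):
        return params_billion * 1e9 * bits_per_weight / 8 / 2**30

    for name, params, bits in [("Llama 3.2 3B @ 4-bit", 3, 4),
                               ("Llama 3.1 8B @ 5-bit", 8, 5),
                               ("Gemma 2 9B @ 6-bit",   9, 6)]:
        print(f"{name}: ~{weight_gib(params, bits):.1f} GiB")
    # Gemma 2 9B at 6-bit is already ~6.3 GiB of weights alone, which is why it
    # fits comfortably on an 11 GB 2080 Ti but not on a 6 GB card.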
If you want to set up an AI server for your own use, it's exceedingly easy to install LM Studio and hit the "serve an API" button.
Testing performance this way, I got about 0.5-1.5 tokens per second with an 8GB 4bit quantized model on an old DL360 rack-mount server with 192GB RAM and 2 E5-2670 CPUs. I got about 20-50 tokens per second on my laptop with a mobile RTX 4080.
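For reference, that kind of measurement only takes a few lines against the API LM Studio serves; this sketch assumes the default OpenAI-compatible endpoint on port 1234 and that the server reports token usage:

    # Crude throughput check against LM Studio's local OpenAI-compatible server.
    # The model field is a placeholder; the server answers with whatever is loaded.
    import time
    import requests

    payload = {
        "model": "local-model",
        "messages": [{"role": "user", "content": "Explain RAID 5 in one paragraph."}],
        "max_tokens": 256,
    }

    start = time.time()
    resp = requests.post("http://localhost:1234/v1/chat/completions", json=payload).json()
    elapsed = time.time() - start

    tokens = resp["usage"]["completion_tokens"]   # assumes usage is reported
    print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tok/s")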
I am using an old laptop with a GTX 1060 (6 GB VRAM) to run a home server with Ubuntu and Ollama. Thanks to quantization, Ollama can run 7B/8B models on an 8-year-old laptop GPU with 6 GB VRAM.
The last time I tried a local LLM was about a year ago, with a 2070S and a 3950X. The performance was quite slow for anything beyond Phi 3.5, and the small models' quality feels worse than what some providers offer for cheap or free, so it doesn't seem worth it with my current hardware.
Edit: I've loaded a Llama 3.1 8B Instruct GGUF and got 12.61 tok/sec, and 80 tok/sec for 3.2 3B.
Why disable LVM for a smoother reboot experience? For encryption I get it since you need a key to mount, but all my setups have LVM or ZFS and I'd say my reboots are smooth enough.
Coolify is quite nice, have been running some things with the v4 beta.
It reminds me a bit of making websites with a page builder. Easy to install and click around to get something running fairly quickly, without thinking too much about it.
The problems are quite similar too: training wheels getting stuck in the woods more easily, hehe.
V4 beta is working well for me. Also the new core dev Coolify hired mentioned in a Tweet this week that they're fixing up lots of bugs to get ready for V4 stable.
Can you use a self-hosted LLM that fits in 12 GB VRAM as a reasonable substitute for Copilot in VS Code? And if so, can you give it documentation and other code repositories to make it better at a particular language and platform?
Technically yes, but it will yield poor results. We did it internally at big corp n+1 and it, frankly, blows. Other than menial tasks, it's good for nothing but a scout badge.
Is that really that much worse than full copilot, though? When we tried it this past spring, it was really cool but not quite useful enough to actually stick with.
How is Coolify different from Ollama? Is it better? Worse? I like Ollama because I can pull models and it exposes a REST API to me, which is great for development.
Snark aside, even in Germany (where electricity is very expensive) it is more economical to self-host than to pay for a subscription to any of the commercial providers.
I don’t know, it’s kind of amazing how good the lighter weight self hosted models are now.
Given a 16 GB system with CPU inference only, I'm hosting Gemma 2 9B at Q8 for LLM tasks and SDXL Turbo for image work, and besides the memory usage creeping up for a second or so while I invoke a prompt, they're basically undetectable in the background.
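Not necessarily the exact stack above, but as a sketch of what CPU-only SDXL-Turbo looks like with the Hugging Face diffusers library (model ID and settings per its docs, prompt made up):

    # Illustrative CPU-only SDXL-Turbo generation with diffusers.
    # Turbo models are meant to run in 1-4 steps with guidance disabled.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float32  # float32 for CPU
    )
    pipe.to("cpu")

    image = pipe(
        prompt="a cozy self-hosted server closet, watercolor",
        num_inference_steps=1,
        guidance_scale=0.0,
    ).images[0]
    image.save("out.png")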