Runtipi: Docker-based home server management (runtipi.io)
201 points by ensocode 6 months ago | 101 comments



This appears to suffer from the same mistake as many projects in this space: it focuses on making it really easy to run lots of software, but has a very poor story when it comes to keeping the data and time you put in safe across upgrades and issues. The only documented page on backing up requires taking the entire system down, and there appears to be no guidance or provision for safely handling software upgrades. This sets people up for the worst kind of self-hosting failure: they get excited about setting up a bunch of potentially really useful applications, invest time and data into them, and then, unprepared, get burned badly when it all comes crashing down in an upgrade or hardware failure. This is how people move back to SaaS and never look back. It's utterly critical to get right, and it's completely missing here.


I'm working on something similar, and the crux of that issue is configurability vs. automation. That is, it's very easy to make backups for a system that users can't configure at all: you just ship the image with some rsync commands or something and you're done.

Once you start letting people edit the config files, you get into a spot where you basically need to be able to parse those config files to read file paths. That often means writing version-specific parsers for configuration options that are introduced or removed in certain versions, or that have differing defaults (e.g. in 1.2, if they don't set "storage_path" it's at /x/, but in 1.3, if they don't set it, it defaults to /y/).
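
To make that concrete, here's a rough sketch of the kind of version-aware lookup this forces on you (the option name and default paths are just the made-up example above):

  # sketch: resolve an app's data path before backing it up, falling back
  # to a version-specific default when the user left the option unset
  resolve_storage_path() {
    local config="$1" version="$2" path
    path=$(sed -n 's/^storage_path[[:space:]]*=[[:space:]]*//p' "$config")
    if [ -z "$path" ]; then
      case "$version" in
        1.2.*) path=/x ;;   # default before the change
        *)     path=/y ;;   # default from 1.3 onwards
      esac
    fi
    printf '%s\n' "$path"
  }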

That gets to be a lot of work.

Then it gets even worse when the users can edit the Docker config for the images, because all bets are off at that point. The user could do all kinds of weird, fucky shit that infinitely loops naive backup scripts because of host volume mounts, or have one of the mounts actually be part of an NFS mount on the host with 200ms of latency so backups just hang forever, and so on.

It's just begging for an infinite series of bugs from people who did something weird with their config and ended up backing up their whole drive instead of the Docker folder, or wiping their whole root drive because they mounted their host FS root into the backups folder and the "old backup cleanup" script deleted it, or who knows what.

At some point, it's easier to just make your own setup where you define the limitations of that setup than it is to use someone else's setup but have to find the limitations on your own.


> Once you start letting people edit the config files,

That way lies madness...

What can be done instead is to provide your own unified (schema-validated) configuration options, and generate the app-specific config files from your source of truth. Then you know what the user can configure and how to back everything up (and how to do a lot of other things in an automated fashion). And you also have a safe upgrade path if the format of any underlying config changes.
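
Concretely, that can be as simple as rendering the per-app files from templates whenever the unified config changes; a minimal sketch (file names are hypothetical):

  # one validated source of truth, app-specific configs generated from it
  set -a; . ./unified.env; set +a          # export the unified config
  envsubst < templates/app-a.conf.tpl > apps/app-a/app.conf
  envsubst < templates/app-b.yml.tpl  > apps/app-b/config.yml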


You can, but I find it very tedious. O2M or M2M relationships become cumbersome. Not because they're hard to use, but just because there are so fucking many of them.

I opted to store the app and version, and have a function that retrieves a config parser for it. I just didn't want to manage 300 tables just to get the schema to work.

That's also partially because I'm using Ent in Go, so it generates a struct for each table. It'd make my auto-complete useless, and that's not a price I'm willing to pay on a personal project lol


The one thing you have working in your favor is that the more likely someone is to want to customize, the more likely they are to understand that they have to be responsible for backups and updates.

Is your project public? I'm working on one as well and would love to see what you have cooking.


It's not at the moment, but I wouldn't be opposed to chucking you some access if you wanted to look (or talk about joining, I'm solo at the moment).

The rough gist is that it's for hosting dedicated servers for video games, with an emphasis on having an easier-to-use UI than most providers, combined with a usable API/CLI for advanced users. I.e., instead of editing config files as regular text files, I want to offer a view with drop-downs for each config option and a description of what each value does (with some hidden for orchestration reasons, like the port it listens on).

Architecture-wise, it's a 3-tier app: HTTP API frontend talks to gRPC backend talks to Postgres. gRPC in that middle tier because I want to move the control plane for the agents into gRPC. Currently, the control plane consists of generating an Ansible inventory and executing it, and Ansible handles downloading the game if it doesn't exist, making dirs, SFTPing configs, installing a Prometheus agent, etc.
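
The generated-inventory step is nothing fancy; it's roughly this shape (host, group, and playbook names are placeholders):

  # control plane writes an inventory, then hands off to Ansible
  printf '%s\n' '[game_servers]' \
    'node1 ansible_host=203.0.113.10 ansible_user=deploy' > inventory.ini
  ansible-playbook -i inventory.ini provision.yml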

It's not anything I intend to hyperscale, but I've always been the dedicated server guy among my friends, so I figured I might as well make a little thing out of it. I'd be happy with a small ARR.


That's exactly how I feel. Not to mention that I'm always looking for a way to monitor the system, and there is no uniform standard, if monitoring is possible at all. It's as if there's a hidden message that, apart from how much disk space or CPU is left, everything else is irrelevant. But for machines like these that share many processes, it's exactly the opposite! When there's a problem, I need to know exactly who is responsible!


This is one of the reasons I groan every time there's a new selfhosting thing that uses Docker. It's easy to start because all the wiring is there and everyone already makes their apps available with a Dockerfile, but it's not a remotely good solution for selfhosting and people should stop trying to use it that way.


I think it's the easy start that's the problem.

Docker and Docker Compose are pretty good for self-hosting, except you have to understand enough to realize that it's not as simple as it sounds. You can easily lose data you didn't know you had if you're not paying attention.

"I'll just delete the container and restart" sounds like a great solution, but you just blew away your photos or whatever because you didn't know it created some volume for you.

I have a btrfs volume as the root of all the apps. Each application is in an individual subvolume so they can be snapshotted at upgrade time. The docker-compose.yml file points each app's volume to a relative directory inside that subvolume.

This way I can move them around individually, and all their data is right there. I can back them up pretty easily too.
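
For anyone curious, the moving parts are roughly this (paths are illustrative):

  # one subvolume per app, snapshotted read-only before an upgrade
  btrfs subvolume create /srv/apps/nextcloud
  btrfs subvolume snapshot -r /srv/apps/nextcloud /srv/snapshots/nextcloud-$(date +%F)
  # the app's docker-compose.yml then mounts relative paths, e.g. ./data:/var/www/html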

Works for me and my use case, but you could never expect it to be turn key, and you couldn't hand it off to somebody non-technical.


A more detailed explanation from ten years ago: https://sandstorm.io/news/2014-08-19-why-not-run-docker-apps


I only look at selfhosting things that provide a docker image. This is how it should be: the internals are hidden from you and you just care about the data and the configuration.

If you want to use a monolith you will have conflicts sooner or later. If you want to use multiple VMs you need to orchestrate them.

If you know how to do the above you are good to go to learn docker which is ultimately much simpler.


Unfortunately Sandstorm's approach, i.e. rewriting all the apps to fit a specific ecosystem, hasn't taken off either. I think Docker with a little extra tooling for managing updates and backups is likely to be the eventual solution.


The problem is the Docker solution will never ever work and yet people are wasting thousands of person-hours trying, over and over in fact in dozens of redundant non-solutions. Sure Sandstorm or something like it will take a lot more effort, but at least it's moving in the correct direction.

Just because Docker selfhosting has "taken off" doesn't mean it isn't still a dead end.

EDIT: Also, to be clear, Docker is also a "specific ecosystem". It's very popular, but that doesn't make it any more general than anything else.


Why would backups and restores never work, though? Essentially, the only thing you need to do is save your volumes/mounts; the rest is a bunch of YAML in a docker-compose file, for example. How is that "will never ever work"?
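
For a named volume it's basically the standard throwaway-container pattern (volume name and paths are just examples):

  # archive a named volume without caring what's inside it
  docker run --rm \
    -v myapp_data:/data:ro \
    -v "$PWD":/backup \
    alpine tar czf /backup/myapp_data-$(date +%F).tar.gz -C /data .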


The issue is that there is a lot more to a home server than updates and backups, and even doing those correctly automatically is... pretty hard if you don't have very strong opinions about how apps need to support them.

So to give you an idea, Sandstorm requires that an app always recover from forced termination (the only way it shuts apps down!) and that a given app carry all of the code it needs to update its data from any previous version. If you don't do these things, your homelabber will inevitably be fixing stuff themselves in various apps, because their server lost power, or the app developer demanded some bespoke, inane intermediate update steps, or they missed a few versions while their server was off.

What percentage of apps made available via Docker meets those requirements? Probably a lot less than you'd think. (Most of them that do easily work on Sandstorm!) But if you want to make selfhosting work for non-sysadmins (or sysadmins who don't want to do a second job when they get home from their main job), you have to do these things.

Another thing is that you basically have to have security right because your users and even your server admin probably aren't experts, and definitely aren't paying for an enterprise-class firewall. So a big part of Sandstorm's opinionated app design is around making it fundamentally impossible for app vulnerabilities to exist. Things like proxying everything through the platform, isolating individual documents into their own app instances, handling authentication and authorization, etc. all become core to making apps secure enough to not have to worry about them that much.

This is especially important because a lot of selfhosted apps get abandoned and unmaintained, but selfhosters still want to use them. Of course, all these things fundamentally require changes to the app that your random Docker container shipper can't insist on, and that's before we even get into performance concerns, which are key since selfhosters tend to use old or cheap hardware.


I wrote and use Harbormaster (https://harbormaster.readthedocs.io/) for this use case. It doesn't have a UI, but it basically only needs a Compose file to run an app, and all data (for all apps) lives in a data/ directory that you can back up at your leisure.

Everything is auto updated from a single YAML specification (which I usually pin), so the process to update something is "change the version in the config, commit, and the server will pick the change up and deploy it".

It's just a thin layer over "git pull; docker compose restart", but I love it.
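
Roughly, each app's update cycle boils down to something like this (the idea, not Harbormaster's actual internals; paths are made up):

  cd /srv/harbormaster/apps/myapp \
    && git pull \
    && docker compose pull \
    && docker compose up -d   # restart on whatever version the config now pins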


Personal opinion, a home lab is the perfect place to learn how to actually configure things and properly set them up: no docker, no ansible, no salt… take off the training wheels and learn it. Then, learn to write your own playbooks, your own compose files, etc.

Additionally, if people think that learning how to configure and deploy stuff is too tedious and/or too difficult, write software that has better UI, not more layers of configuration and administration.

Final thought, git is better than ansible/salt/chef/puppet, and containers are silly.


"Containers are silly" might be the literal worst take I've seen in this space, especially if one is at all interested in "onboarding more moderately techy folk" to the idea of homelabs, and self-hosting, which I think is incredibly important.

Once you know Docker, you can abstract away SO MUCH and get things up and running incredibly quickly.

Let's say you have someone who now hates Spotify. Perfect. Before Docker it was a huge pain to set up an alternative and hope you got it right.

After Docker? Just TRY ALL OF THEM. Don't like Navidrome/mstream/whatever? Okay, docker-compose down and try the next one.


TBH, containers are the wrong way of doing things, but they are the least bad solution for shipping software that we currently have.


This is the truth.

Containers as a bag that you put shitty software in to isolate it from the rest of your system and its peers...

The whole downstream-vs-upstream debate about who should package what, and where you get software from, is showing us what the real problem is.

Software should be easy to install.

Containers are a terrible way to do that.


What's easier than containers? It sounds to me like you let your hate of the problem carry over to hating the solution.

I have a Python web app, what's a better way to distribute it than containers?


I basically agree with you, but it would be nice if containers were a bit simpler. Apptainer does some really nice things for example.


I like Python, I write Python. I think Python is the high-fructose corn syrup of programming: it's in everything, embedded, web, ML, data science...

>> What's easier than containers?

Easy is the wrong metric. It is easy to not scoop your dog's poop when you take him on a walk. It is easy to drop your trash on the ground and not take it to a can... Easy is a BAD METRIC.

Python is one of the problems: your code + runtime + random libs + venv. That's a lot of nonsense that could be a binary blob. Part of this is the fault of Python (or JS, or Ruby, or...); a lot of it is on us as software devs.

And microservices had a hand in this... Your N services don't run on N*X pieces of hardware. They run on far less and end up on virtual networks talking to each other over HTTPS when they could be using direct communication. It's even more laughable if they go out to a load balancer first.

Software packaging and delivery is a problem. We should figure out a few good ways to fix that.


Easy is the correct metric if, like me, you believe the solutions to our major problems in the software space involve not "trying to get the few companies to behave better," but "trying to get MORE PEOPLE to understand -- and put in practice -- that this doesn't have to be in the hands of a few companies."


You've conflated "easy" with "littering". The easiness of littering is an argument for littering, not against easiness.


So, there’s no better way.


As much as I like scripting languages, the fact that you cannot build a single statically-built executable is the main problem when it comes to distribution.

Go/Rust are miles ahead in that regard, and the only reason to use containers with them is the sandbox aspect.

(There are ways to create a self-contained executable including Python interpreter, code, and all dependencies, but it's far from ideal, and just going with Docker is a lot less problematic.)


Sure, but I have a Python web app.


> TBH, containers are the wrong way of doing things

And yet, people still use VMs because the majority of software isn't perfect and pollutes whatever system it's running on to some degree. Just earlier today I saw a friend look at an install script for some software, which was set on freely installing apt packages, replacing the installed Node version, purging the apt repositories on the system (literally "rm -rf /etc/apt/sources.list.d/*", wtf) and a lot of things that no decently written piece of software should do on its own, at least not without explicit confirmation, but that you still sometimes see out in the wild. The friend was reviewing the script after running it and wondering why their system was trashed.

You often need some separation between what is your host system that needs to be pretty stable and the flaming mess that is whatever software that you'll install on it for productivity/entertainment/profit reasons, especially with easily configurable resource limits, custom port mappings, custom storage directories without access to the host system by default and eventually even horizontal scaling. Containers just happen to have really good DX around all of that and feel like the right way of doing things, in a flawed world.
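
The "really good DX" part is that each of those knobs is a single flag (image name and paths below are placeholders):

  # resource limits, port mapping, and a contained storage directory in one command
  docker run -d --name someapp \
    --memory 512m --cpus 1.5 \
    -p 8080:80 \
    -v /srv/someapp/data:/data \
    someimage:latest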


In some cases, running docker containers using the vendor-provided docker compose file, inside a VM that you yourself provision according to your own needs using your own tooling (ansible etc), is the best approach, which at this point is ridiculous, but here we are.


Dependency hell and polluting the system IS the problem. Ideally, we would have statically built self-contained executables, and the OS would offer sandboxing (process, network, filesystem) out of the box.

Containers at the moment are just lightweight VMs, and a better workaround to the problem (better especially in regard to UX; VM isolation is still superior to cgroups/namespaces on Linux).


The Linux OS literally provides sandboxing; we call the set of technologies that provide this "containers". "Containers" are not lightweight VMs; VMs are a virtualized computer, containers are a virtualized kernel.

Statically linked everything is ridiculous and will never be the future of software deployment.


VMs and virtualization are amazing, and I don't get anyone who hates them. Their entire purpose is to limit blast radius, and that's like the most valuable thing for me when it comes to maintainability. No colliding libraries or dependencies, no broken installations or upgrades polluting my system, all for an honestly super minimal performance hit.

I feel like the people complaining about containers today are the same type who complained about compilers back in the day.


This is why you have integration engineers.


scratch containers aren't so bad


> get things up and running incredibly quickly

Optimizing for the wrong metric.


Accessibility is the metric in software adoption. Raspberry Pi and Arduino have made those computing form factors wildly more approachable for a general audience. Are they optimizing for the wrong metric?


Raspberry Pi and Arduino are focused on learning, which is the opposite of "getting things up and running incredibly quickly".


Underrated comment. So many orgs seem to spend a lot of time and effort on DX that makes onboarding, building, and deploying new software packages fast. But in most cases getting these pipelines set up was never a bottleneck, and automating their creation is a nearly NP-complete problem.

I think the reasoning behind these efforts is twofold: first, devops empire builders who mistake this for a good metric, and second, leadership that thinks it might let them get rid of ops people.


for YOU.

I teach IT. I'm trying to get people interested in empowering themselves, not necessarily create more cogs.


Then you should optimize for learning, not for "quick deploy and forget" which is precisely the pattern of "cogs".


username checks out

"move fast and break things" was always a short-sighted fad in the context of infrastructure


I think this is good advice for technical people, but I also think we need to drastically lower the barrier of entry for people who would benefit from owning their compute and data but don't have the skills or interest necessary.


I agree we need to do that, but I would argue at this point, the best option for the people without the skills/interest is probably buying a Synology. It would be awesome if there was a free OSS solution that people could just follow instructions and deploy on an inexpensive miniPC and not have to worry about anything and have auto-updates and zero problems, but I don't know if that is realistic. If we do ever see it, I feel confident it won't be free.

I see tools like this as a really good middle ground between piecing all the different parts together and managing runbooks and whatnot and buying an off-the-shelf appliance like a Synology or other consumer/prosumer/SMB NAS.


Sandstorm does this, although we are badly in need of funding, contributors, and all the things needed to meaningfully run an open source project. Our updates are generally evergreen but we are currently blocked on offering that while we establish a new community-run infrastructure and development process.

The problem is you do need to reinvent everything to do this right, and reinventing everything is hard and a bit of a slog. Some problems are also just... still hard, there's no secure and good way to help someone open a port on their home router.


Open ports? Nowadays people just use Tailscale... AFAIK


I mean I suppose it depends on how you want to limit your server setup, and what limitations you have (i.e. CGNAT). Considering I regularly share things on my server with others, and I let other people who are less technically inclined use my server, having to use a VPN service to access it would be extremely suboptimal for that. I also don't want to have to host a proxy server out on a public provider.


TrueNAS and Proxmox are easy too; I don't know what Synology runs as its OS.


And that's basically defining protocols and letting people build interfaces. But that goes against many companies' objectives. Everyone is trying to move you to their database and cloud computing. And most people prefer convenience over ownership or privacy. Installing Gitea on a VPS is basically a walk in the park. Not as easy as clicking a button in cPanel, but I think some things should be learned and not everything has to be "magic". You don't need to be a mechanic to drive a car, but some knowledge is required to notice when someone is overselling you engine oil.


Don’t do this to learn if you really want to use the servers for important data and expose them to the internet. You can shoot yourself in the foot with Docker containers, but you can shoot yourself in the face by blindly installing dozens of packages and their dependencies on a single box.


You can keep them separate with Docker. I'm not sure what the workflow looks like, however: you make a simple image with just (say) alpine + apache, you run a shell there, set everything up, and when it works you basically try to replicate all the things you did manually in the Dockerfile? So in the end, you have to do everything twice?


Did that for years. Spent so much more time fiddling with computers than getting any actual benefit from them.

Now all my shit’s in docker, launched by one shell script per service. I let docker take care of restarts. There’s no mystery about where exactly all of the config and important files for any of the services lives.

I barely even have to care which distro I’m running it on. It’s great.

What’s funny is that the benefits of docker, for me, have little to do with its notable technical features.

1. It provides a common cross-distro interface for managing services.

2. It provides a cross-distro rolling-release package manager that actually works pretty well and has well-maintained, and often official, packages.

3. It strongly encourages people building packages for that package manager to make their config more straightforward and to make locations of config files and data explicit and documented thoroughly in a single place. That is, it forces a lot of shitty software to de-shittify itself at the point I’ll interact with it (via Docker).


I personally like learning this way. Have a single server with at least 20 physical cores available. Use qemu to create VMs.

Personally, I had NixOS (minimal) installed on the VMs. I scripted the setup (a live CD created with my SSH key so I can remotely set up the VM, e.g. disk partitioning). Then I had a Nix configuration to set up the environment.

A bit of a learning curve, but the benefit here is repeatable environments. I was even able to learn more about k8s using a small mini cluster: the host machine was the controller node while the VMs were nodes.

By injecting latency between nodes to simulate the distance between different data center regions (i.e., us-east vs us-west), I was actually able to reproduce some distributed app issues. All of this while not having to give up $$$ to the major server resellers (or "cloud" providers).

No worries about forgetting to tear down the cluster and receiving a surprise bill at the end of the month.


I always just use a systemd or OpenRC service to run my stuff, pretty much a shell script and voilà. As long as you know to set proper firewall rules and run things as non-root, you're pretty much ready.


Not only does using Docker avoid dependency hell (which is IMO the biggest problem when running different software on a single machine), it has good sandboxing capabilities (out-of-the-box, no messing with systemd-nspawn) and UX that's miles ahead of systemd.


There's a compromise to be made here. I use systemd to run podman containers! It's great, built-in to podman, and easy.

  podman generate systemd --new --files --name mypod
And now you have a bunch of systemd service files ready to copy over and load.
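
Then it's just (unit names depend on what was generated; these assume a pod called mypod):

  cp pod-mypod.service container-*.service ~/.config/systemd/user/
  systemctl --user daemon-reload
  systemctl --user enable --now pod-mypod.service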


Definitely would recommend using Docker though. Can be just a bash script.

Running complex software with a lot of dependencies bare-metal is a recipe for disaster nowadays, unfortunately.


I love containers. I have a tiny RPi under the desk, run Debian on it, and everything is in containers. I don't have to deal with one piece of software requiring one version and another requiring something different, or with something crashing and bringing everything else down with it.

I have some space to toy with it, but for the purpose of running something utterly low maintenance, docker and containers are awesome.


"Containers are silly"

What a silly take


umm, homelab is also the perfect place to play with docker, ansible, kube, etc. that's the whole point.

git and ansible are not mutually exclusive. containers are not silly.

you'll come around some day


In my experience, people that disparage containerization are either forgetting or weren’t around to know about how awful bespoke and project specific build and deploy practices actually were. Not that docker is perfect, but the concept of containerization is so much better than the alternative it’s actually kind of insane.


Exactly. If containerization solves one thing, it’s installing two pieces of software that e.g. want a different version of system Python installed. I love Debian but it won’t help you much here.


Docker has come a long way. There was a time when the documentation struggled to coherently describe what images were, and other tools were good enough that you didn't bother. Today there are images for most things, the documentation is clear, and support is widespread. But being able to set up a server on a Linux box is still a basic skill that any web administrator should know. People make the mistake of thinking that because they use Docker, they should use proprietary cloud hosting. My controversial belief is that the right solution for at least 90% of web services is just slapping docker-compose on a VM, running SQLite in WAL mode backed up by Litestream, and maybe syncing logs with Vector if you want to get really crazy. Run a production web service for $30/mo without cloud lock-in or BS. It's kind of funny that building a CD system for AWS is what made me disillusioned with the whole thing. But yeah, Docker is good.


docker swarm is also a decent solution if you do need to distribute some workloads, while still using a docker compose file with a few extra tweaks. I use this to distribute compute intensive jobs across a few servers and it pretty much just works at this scale. The sharp edges I've come across are related to differences between the compose file versions supported by compose and swarm. Swarm continues to use Compose file version 3 which was used by Compose V1 [1].

1: https://docs.docker.com/engine/swarm/stack-deploy/
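
For anyone who hasn't tried it, the workflow really is just a (v3) compose file plus a couple of commands (the stack name is arbitrary):

  docker swarm init                                # once, on the first manager node
  docker stack deploy -c docker-compose.yml jobs   # deploy or update the stack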


I've always wanted to get into using docker swarm for a homelab, since I love docker compose for dev/single node production deploys.

Any tips on the minimum hardware or VPS's needed to get a small swarm cluster setup?


> Any tips on the minimum hardware or VPS's needed to get a small swarm cluster setup?

From my testing, Docker Swarm is very lightweight and uses less memory than both HashiCorp Nomad and lightweight Kubernetes distros (like K3s). Most of the resource requirements will depend on what containers you actually want to run on the nodes.

You might build a cluster from a bunch of Raspberry Pis, some old OptiPlex boxes or laptops, or whatever you have laying around and it's mostly going to be okay. On a practical level, anything with 1-2 CPU cores and 4 GB of RAM will be okay for running any actually useful software, like a web server/reverse proxy, some databases (PostgreSQL/MySQL/MariaDB), as well as either something for a back end or some pre-packaged software, like Nextcloud.

So even $5/month VPSes are more than suitable, even from some of the cheaper hosts like Hetzner or Contabo (though the latter has a bad rep for limited/no support).

That said, you might also want to look at something like Portainer for a nice web based UI, for administering the cluster more easily, it really helps with discoverability and also gives you redeploy web hooks, to make CI easier: https://www.portainer.io/ (works for both Docker Swarm as well as Kubernetes, except the Kubernetes ingress control was a little bit clunky with Traefik instead of Nginx)


You can do Docker Swarm with a single node (both manager and worker). For a high availability setup, you need a majority of manager nodes for a quorum, so a minimum of three nodes to tolerate a single failure.

I actually just migrated my 4-node homelab from Docker Swarm to standalone instances. My nodes all have very different performance characteristics so I had every one of my services restricted to a specific node - in effect, not making use of most of the useful features of Swarm.

Some features of Swarm are nifty, but in particular I found that a) pinning every service to a single node is counter to the point of Swarm, and b) I didn't like any of the options for storage. (1. Local storage, making containers even less portable across nodes. 2. Shared replicated storage, complicated. 3. Online file backend, expensive. 4. NFS shares, and then my NAS is a point of failure for every one of my nodes.)


I’m curious how UI impacts software configuration and deployment? Are you using a UI to set your compiler flags or something?


Configuration, deployment, and program initialization/startup are all still "user interface", just a different part of it. Most software feels like that part was largely an afterthought and not part of the design process.


If that's your take, I'm sort of curious why you'd dislike containerization, whose primary selling point is to standardize and improve the developer experience for, literally exactly, configuration, deployment and program initialization...


Why would you write "your own compose files" if you're not running docker?


My point was, learn it first, then learn docker and Ansible and such.


> At its core, Tipi is designed to be easy to use and accessible for everyone

> This guide will help you install Tipi on your server. Make sure your server is secured and you have followed basic security practices before installing Tipi. (e.g. firewall, ssh keys, root access, etc.)

I love to see efforts like this, please keep it up.

But expecting users to learn everything necessary to run a secure server is simply not going to achieve the stated goal of being accessible to everyone.

We need something like an app that you can install on your laptop from the Windows store, with a quick OAuth flow to handle all networking through a service like Cloudflare Tunnel, and automatic updates and backups of all apps.


I'd argue the easiest way to achieve this is to refrain from opening any ports, and using Tailscale to get remote access.


I doubt that would be easy at the level of accessibility the GP suggests. It would be easy to have integrated firewall management that just exposes ports 443/80 for a reverse proxy and handles communication with the Docker networks. It could also help set up a VPN server and disallow access to the server except via approved clients.

Someone suggested Cosmos in the comments. I think that is the closest to what I am describing. However, I've been into self-hosting for a couple of years now and have development experience, so I'm probably biased. It would likely be different for an average person without deep knowledge.


But then your firewall or Cosmos is exposed to the internet waiting for a 0day to be released, and chances are they will not be updated as soon as a fix comes out.

VPN server is already what Tailscale does at this point. I'm not a shill by the way, just a regular user impressed by the ease of installation/use of their product.


Tailscale is awesome, but requiring anyone you want to share data or apps with to install Tailscale leaves a lot of simple interactions off the table.


If someone doesn’t want to learn how to secure a server, then they shouldn’t be self hosting anyway.

Or, as a car analogy: if someone doesn’t want to learn how to drive safely, they shouldn’t be driving anyway.


I toyed with the idea of creating something like that a year or so ago. I have a company that makes a tool which simplifies desktop development a ton, and desktop development was previously the blocker for people trying to do this, which is why there are so many products that claim to be targeted at everyone but start with a Linux CLI.

So, can you make it brainless? Sure. Writing a nice desktop GUI that spins up a VM, logs in and administers it for you is easy. But ... who will buy it?

The problem is that self-hosting isn't something that seems to solve a problem faced by non-technical people. Why do they want it? Privacy is not workable, because beyond most people just not caring, it's turtles all the way down: the moment you outsource administration of the service your data is accessible to those people. Whether that's a natty GUI or a cloud provider or a SaaS, unless it's a machine physically in your home someone can get at your data. And with supply chain attacks not even that is truly private really.

Cost is clearly not viable. Companies give SaaS away for free. Even when they charge, other corps would rather pay Microsoft to store all their supposedly super-confidential internal docs, chats and emails than administer their own email servers.

Usability: no. Big SaaS operations can invest in the best UI designers, so the self-hostable stuff is often derivative or behind.

What's left?


I think this is useful for less tech oriented people to get a basic homelab setup.

But I personally find it much more straightforward and maintainable to just use Compose. Virtually every service you would want to run has first-class support for Docker/Podman and Compose.


These services are cool, but I almost always end up doing it myself anyway. Doing it yourself is more fun anyway.

Typically I just make my own one click deploys that fit my preferences. Not knowing how your container starts and runs is a recipe for disaster.


Anyone have any experience using this? I've been managing most of my homelab infrastructure with a combination of saltstack and docker compose files and I'm curious how this would stack up.


I used to run it and generally liked it, but eventually felt limited in the things I could do. At the time it was a hassle to run non-Tipi apps behind the Traefik instance, and eventually I wanted SSO.

I ended up in a similar place with proxmox, docker-compose, and portainer but I have it on my backlog to try a competitor, Cosmos, which says many of the things I want to hear. User auth, bring your own apps and docker configs, etc.

https://github.com/azukaar/Cosmos-Server/


I tried it for a few months and it was nice. But I think it lacks a way to configure mount points for the apps' storage.

By default, each app has its own storage folder, which isn't really a useful default for a home lab: you probably want, idk, Syncthing, Nextcloud and Transmission to be able to access the same folders.

It’s doable, but you have to edit the YAML files yourself, which I thought removed most of the interest of the project.


Agreed. I struggled for days trying to get an external drive mounted into Nextcloud.


Ran into the same problem with Umbrel. Wanted to use PhotoPrism for a gallery but Nextcloud and Syncthing to back up the photos. It was easier to just manage the containers myself.


I've been evaluating it alongside Cosmos and Umbrel, in addition to tools I've used before like CapRover. I like it but I don't have any strong feelings yet. I will probably do some sort of writeup after I do more evaluations and tests and play with more things but I haven't had the time to dedicate to it.

If you're already familiar with setting things up Salt/Ansible/whatever and Docker compose, you might not need something like this -- especially if you're already using a dashboard like Dashy or whatever.

The biggest thing is that these types of tools make it a lot easier to set things up. There are inherent security risks too if you don't know what you are doing, though I'd argue this is a great way to learn (and simply knowing how to use Salt, Ansible, or another infrastructure-as-code tool is no guarantee that any of the stuff you deploy is more secure), and it's a good entryway for people who don't want to do the Synology thing.

I like these sorts of projects because even though I can do most of the stuff manually (I'd obv. automate using Ansible or something), I often don't want to if I just want to play with stuff on a box and the app store nature of these things is often preferable to finding the right docker image (or modifying it) for something. I'm lazy and I like turnkey solutions.


> there are inherent security risks too if you don't know what you are doing

It's actually worse. Even if you know what you are doing, there is some amount of work and monitoring you need to do just to follow basic guidelines [0].

What we would actually need is a "self-hosting management" platform or tool that at least helps you manage basic security around what you run.

[0] https://cheatsheetseries.owasp.org/cheatsheets/Docker_Securi...


I’ve been using it for a few months on a Raspberry Pi 4. I installed (via Runtipi) Pi-hole for DNS, Netdata for monitoring and Tailscale for remote access. Works great for me and my family: I can stream videos from Jellyfin for my kids when we're on the go, and all the family devices use Pi-hole.

I tried to do things myself in the past but this is so much easier if you don’t have particular needs.


Does anyone have any recommendations on top of this? I personally run Portainer and would like more features like grouping containers post-creation and controlling container start order. I also have an issue where my VPN container, if updated, breaks all containers that depend on it. Portainer handles a lot, but I need that little bit more so I have to look at the panel less. I'm not sure if this would work for me, since I build a lot of custom containers and this looks like it's better suited to purpose-built containers.


Umbrel, Citadel, Start9, MASH playbook.

Sorry, on mobile right now, but these are great alternative projects


I find terraform + acme provider + docker provider (w/ ssh uri) to be the best combo.

All my images live on a private GitLab registry, and terraform provisions them.

Keeping my infra up-to-date is as simple as "terraform plan -out infra.plan && terraform apply infra.plan" (yes, I know I shouldn't blindly accept, but it's my home lab and I'll accept if I want to).

Note: SSH access is only allowed from my IP address, and I have a one-liner that updates the allowed IP address in my infra's L3 firewall.


This looks great, and I want something like this to run Vaultwarden, some 2FA manager, a media manager, Syncthing, Nextcloud and so on.

However, I'm very worried about vulnerabilities in one of the applications getting my entire machine pwned and thus leaking my Vaultwarden data. It feels like this is just one CVE and a Docker privilege escalation away.

What do others think about this? Am I being overly paranoid?


Yes; the framework itself doesn't really add vulnerability, if you were planning to run containers as root anyway.

However. I used it for a year and moved off it. It starts out as a propeller but becomes an anchor. Just build your setup in Ansible; the increased initial effort pays off quickly the moment you want to do something like run rootless containers.


For me it would be a lot of underlying complexity for "just" exposing docker-compose containers to the internet.

I don't really understand the target audience here. If you need to manage DNS, server hardening, backups, upgrades, exposing things to the internet and evaluating the risks behind that, you should be able to do the rest yourself too. And it is single-server only.


Is this a dockerized alternative to https://yunohost.org/?


Quite a few out there, I guess. I always liked the idea of sandbox.io.


I've been testing Coolify [1] for this, so far the experience has been smooth as a self-hostable Vercel/Netlify/PaaS replacement.

Now onto finding/learning host management tools (Ansible, NixOS, Terraform and the like).

[1] https://github.com/coollabsio/coolify


The top section of the site very closely matches that of another project posted on HN a couple of weeks ago. No negative opinion from me, because I also thought it was an incredible design and borrowed a ton of it for one of my sites.


Which site did it take inspiration from? It looks great. I'm always amazed how people pull off minimalist designs. It just ends up looking empty and boring when I try.


How does this differ from Caprover?



