It depends on your software selection. I run machines that I update twice a year, and that's all the maintenance they need. Less than a person-day per year!
I tried both Docker and Bubblewrap. Containers help with managing dependency creep, but if you avoid that in the first place, you don't need containers.
If you only run your own software and are conscientious about your dependencies, then dependency creep is going to be less of an issue, but don't forget that your dependencies often bring their own too. And third-party software is very often much less mindful of it.
But even besides dependency creep, containers simplify your software environment, backups, migrations, permissions (although containers are not a replacement for secure isolation), file-system access, network routing, etc.
Containers also make performance tuning and optimization a lot harder, and increase resource consumption to a far greater degree than people are willing to admit. Per container it's not a lot, a few hundred megabytes here and there, but it builds up very rapidly.
This is compounded by containerization making the ecosystem more complex, which creates demand for additional containers to manage and monitor all the other containers.
I freed up like 20 GB of RAM by getting rid of everything container-related on my search engine server and switching to running everything on bare-metal Debian.
> Containers of course add some overhead, however it is negligible in modern world.
I feel the people most enthusiastically trying to convince you of this are the infrastructure providers who also coincidentally bill you for every megabyte you use.
> I honestly don't know what you have to do to get 20 gigs of overhead with containers, dozens of full ubuntu containers with dev packages and stuff?
Maybe 5 GB from the containers alone; they were pretty slim containers, but a few dozen of them in total. But I ran microk8s, which itself was a handful of gigabytes, and I could also get rid of Kibana and Grafana, which freed up a bunch more.
>I feel the people most enthusiastically trying to convince you of this are the infrastructure providers who also coincidentally bill you for every megabyte you use.
They don't though? Cloud providers sell VM tiers, not individual megabytes, and even then on Linux there is barely any overhead for anything but memory, and even the memory overhead is negligible if you optimize far enough.
>but I ran microk8s which was a handful of gigabytes

My k3s with a dozen of pods fits in couple gigs.
> They don't though? Cloud providers sell VM tiers, not individual megabytes, and even then on Linux there is barely any overhead for anything but memory, and even the memory overhead is negligible if you optimize far enough.
They don't necessarily bill by the RAM, but they definitely bill for network and disk I/O, which are also magnified quite significantly. Especially considering you require redundancy/HA when dealing with distributed computing
(because of that ol' fly in the ointment: P = n!/(r!(n-r)!) * p^r * (1-p)^(n-r))
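That's just the binomial pmf. A quick sketch of why it matters for HA sizing, with made-up uptime numbers (the 3 replicas and 99% availability are hypothetical, not from the thread):

```python
from math import comb

def binom_pmf(r: int, n: int, p: float) -> float:
    """P(exactly r successes in n independent trials)
    = n!/(r!(n-r)!) * p^r * (1-p)^(n-r)."""
    return comb(n, r) * p**r * (1 - p) ** (n - r)

# Hypothetical numbers: 3 replicas, each independently up 99% of the time.
# Chance that zero replicas are up at a given moment:
print(binom_pmf(0, 3, 0.99))  # ~1e-06
```

So redundancy buys you a steep drop in total-outage probability, but you pay for every extra replica's I/O and storage.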
> My k3s with a dozen of pods fits in couple gigs.
My Kibana instance alone could be characterized as "a couple of gigs", though I do produce quite a lot of logs. That is not a problem with logrotate and grep.
>They don't necessarily bill by the RAM, but they definitely bill for network and disk I/O, which are also magnified quite significantly. Especially considering you require redundancy/HA when dealing with distributed computing
Network and Disk I/O come with basically 0 overhead with containers.
> Especially considering you require redundancy/HA when dealing with distributed computing
Why? This is not an apples-to-apples comparison then. If you don't need HA, just run a single container; there are plenty of other benefits besides easier clustering.
>My Kibana instance alone could be characterized as "a couple of gigs", though I do produce quite a lot of logs. That is not a problem with logrotate and grep.
How is Kibana(!) relevant for container resource usage? Just don't use it and go with logrotate and grep? Even if you decide to go with a cluster, you can continue to use syslog and aggregate it :shrug:
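For reference, the logrotate-and-grep approach needs nothing more than a drop-in config; a minimal sketch (the app name and log path are hypothetical):

```
# /etc/logrotate.d/myapp -- hypothetical app and path
/var/log/myapp/*.log {
    daily           # rotate once per day
    rotate 14       # keep two weeks of rotated logs
    compress        # gzip old logs
    delaycompress   # keep the most recent rotation uncompressed for grep
    missingok       # don't error if the log is absent
    notifempty      # skip rotation for empty files
}
```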
I never said you did. I'm asking you to quantify the costs, because all engineering and ops decisions involve tradeoffs. Just because something incurs overhead, or is one of several solutions, doesn't mean that it's bad. It's just an additional factor for consideration. So what are the actual overhead costs?